Test Report: Docker_Linux_crio 21932

84a896b9ca11c6987b6528b1f6e82b411b2540e2:2025-11-24:42492

Failed tests (37/328)

Order  Failed test  Duration (s)
29 TestAddons/serial/Volcano 0.25
35 TestAddons/parallel/Registry 14.63
36 TestAddons/parallel/RegistryCreds 0.41
37 TestAddons/parallel/Ingress 148.22
38 TestAddons/parallel/InspektorGadget 5.28
39 TestAddons/parallel/MetricsServer 5.3
41 TestAddons/parallel/CSI 34.47
42 TestAddons/parallel/Headlamp 2.58
43 TestAddons/parallel/CloudSpanner 5.25
44 TestAddons/parallel/LocalPath 9.12
45 TestAddons/parallel/NvidiaDevicePlugin 6.26
46 TestAddons/parallel/Yakd 6.24
47 TestAddons/parallel/AmdGpuDevicePlugin 6.25
97 TestFunctional/parallel/ServiceCmdConnect 602.72
117 TestFunctional/parallel/ServiceCmd/DeployApp 600.59
137 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.85
138 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.86
139 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.26
140 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.3
142 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.2
143 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.33
152 TestFunctional/parallel/ServiceCmd/HTTPS 0.53
153 TestFunctional/parallel/ServiceCmd/Format 0.53
154 TestFunctional/parallel/ServiceCmd/URL 0.52
191 TestJSONOutput/pause/Command 2.29
197 TestJSONOutput/unpause/Command 1.79
267 TestPause/serial/Pause 6.26
300 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 2.21
302 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 2.18
313 TestStartStop/group/no-preload/serial/Pause 6.21
319 TestStartStop/group/old-k8s-version/serial/Pause 6.27
324 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 2.09
329 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 2.12
332 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 2.58
337 TestStartStop/group/newest-cni/serial/Pause 5.84
356 TestStartStop/group/default-k8s-diff-port/serial/Pause 6.2
363 TestStartStop/group/embed-certs/serial/Pause 7.26
TestAddons/serial/Volcano (0.25s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-715644 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-715644 addons disable volcano --alsologtostderr -v=1: exit status 11 (250.543281ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1124 13:15:51.262738  360904 out.go:360] Setting OutFile to fd 1 ...
	I1124 13:15:51.263037  360904 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:15:51.263048  360904 out.go:374] Setting ErrFile to fd 2...
	I1124 13:15:51.263055  360904 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:15:51.263278  360904 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-348000/.minikube/bin
	I1124 13:15:51.263582  360904 mustload.go:66] Loading cluster: addons-715644
	I1124 13:15:51.263930  360904 config.go:182] Loaded profile config "addons-715644": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 13:15:51.263950  360904 addons.go:622] checking whether the cluster is paused
	I1124 13:15:51.264055  360904 config.go:182] Loaded profile config "addons-715644": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 13:15:51.264071  360904 host.go:66] Checking if "addons-715644" exists ...
	I1124 13:15:51.264463  360904 cli_runner.go:164] Run: docker container inspect addons-715644 --format={{.State.Status}}
	I1124 13:15:51.281909  360904 ssh_runner.go:195] Run: systemctl --version
	I1124 13:15:51.281968  360904 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-715644
	I1124 13:15:51.297841  360904 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/addons-715644/id_rsa Username:docker}
	I1124 13:15:51.395812  360904 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 13:15:51.395923  360904 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 13:15:51.424023  360904 cri.go:89] found id: "32b77a8342024730ccea78bac96aafa904a8452871f7ad6ede2d73201b5297ae"
	I1124 13:15:51.424050  360904 cri.go:89] found id: "9946073c4dbc00e94343f77ffd10424e77a179847087d6518167e5967c01b6ac"
	I1124 13:15:51.424055  360904 cri.go:89] found id: "97f3de9ff4a387cd014af0563e9b3b98a067b536fde246c22a5e118e0732e718"
	I1124 13:15:51.424058  360904 cri.go:89] found id: "3492cc92692158b5e6044d1ea2c4d57bac8bda63d2ed59eab5592914657894e2"
	I1124 13:15:51.424061  360904 cri.go:89] found id: "db3d376ec41b7312ca6691a8743e9d9a78aa3d7d989fa62b5d4bed28d1352645"
	I1124 13:15:51.424065  360904 cri.go:89] found id: "a9fad48eebd55f9988e66d76b4c1ba8045a48e49c7d2ec247434bda584f848bd"
	I1124 13:15:51.424068  360904 cri.go:89] found id: "7f9d1b3fe4a9063805681f97d4f25ffb0176dbe2fc494b57f8b0ba808d906ab6"
	I1124 13:15:51.424071  360904 cri.go:89] found id: "de0cc746d3ed0efaaab995d7c9060f139d2048fdf2dc530c24968b27a493b199"
	I1124 13:15:51.424074  360904 cri.go:89] found id: "93fbc223db37d476f76feddf4b1b953455b15bfd5655c2e0f21618dcd9149be0"
	I1124 13:15:51.424083  360904 cri.go:89] found id: "9be932139aaef6d1a0813197ebd25803631d854bd08cc570ceb08ebb61e42533"
	I1124 13:15:51.424089  360904 cri.go:89] found id: "e91ca551d1e0e1b67d5a38e6388b9a94476991f0536bc399a4fc40157634ce1f"
	I1124 13:15:51.424092  360904 cri.go:89] found id: "edf878679786a9abe21c9897fa78bbc59bd532ce6f4ce69457f2e17deb93802a"
	I1124 13:15:51.424094  360904 cri.go:89] found id: "17cd086a0a85475fa6e37dbc6d551664d7ac78bb7fdc3540fb1bd1e175d77793"
	I1124 13:15:51.424098  360904 cri.go:89] found id: "33bdbf096e506d847514d785957b6ff08d7be79c8c2ce3cad269fc769d56f682"
	I1124 13:15:51.424101  360904 cri.go:89] found id: "bef94f1c94dd311ef47360262b10fc75702b47761e4bf690355c88cd5acbf47d"
	I1124 13:15:51.424120  360904 cri.go:89] found id: "83f5e4de5d19483eba28cce6cc0496cbd37a7f45e5dd8fdd549b5d2a0fe93004"
	I1124 13:15:51.424127  360904 cri.go:89] found id: "9fc9fbc51a1d5d85e698682518d6aabdc2c3030302e75bcb87adb6ae7d4fac0e"
	I1124 13:15:51.424131  360904 cri.go:89] found id: "80ca7185520801a449353432d8a29471e92f942c8e6b30f587a794abac0fb7dd"
	I1124 13:15:51.424134  360904 cri.go:89] found id: "3c0239d349ace6e30dffd2560683ba8f02197dfb6eb490d1097a535ae3d5599f"
	I1124 13:15:51.424137  360904 cri.go:89] found id: "1cd2d69a4521db2c270e5a2192b5d29f185e8986efeacc56186cd5c8a32fba30"
	I1124 13:15:51.424139  360904 cri.go:89] found id: "f906d790e557cecfdacb1936cb0ed8443cc0bc9466c826f9d800db6bf44bf47e"
	I1124 13:15:51.424142  360904 cri.go:89] found id: "8bd061f25cd271e0f1c7d640c968152672462e55b0bd0013dd192360bd8041bf"
	I1124 13:15:51.424144  360904 cri.go:89] found id: "73d6f909ae2dca0d1fb7c89dd3fa82bdb9b4d2d1c56e66703aa1b07a967e3cc6"
	I1124 13:15:51.424147  360904 cri.go:89] found id: "e080d87ce42a145608e63f7c6b4c14b99b3014112ba7d536610206377da1bcb5"
	I1124 13:15:51.424150  360904 cri.go:89] found id: ""
	I1124 13:15:51.424189  360904 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 13:15:51.437427  360904 out.go:203] 
	W1124 13:15:51.438549  360904 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T13:15:51Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T13:15:51Z" level=error msg="open /run/runc: no such file or directory"
	
	W1124 13:15:51.438563  360904 out.go:285] * 
	* 
	W1124 13:15:51.442508  360904 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1124 13:15:51.443578  360904 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volcano addon: args "out/minikube-linux-amd64 -p addons-715644 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.25s)
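
All of the "addons disable" failures in this run exit the same way: minikube's paused-state check shells into the node, lists kube-system containers with crictl, then runs "sudo runc list -f json", which fails because /run/runc does not exist on this crio node. A minimal sketch of reproducing that check by hand, assuming the addons-715644 profile from this run is still up (the individual commands are copied from the cli_runner/cri lines above):

  PROFILE=addons-715644                     # profile name taken from this run
  # 1. node container state (cli_runner.go step)
  docker container inspect "$PROFILE" --format '{{.State.Status}}'
  # 2. kube-system containers (cri.go step)
  minikube -p "$PROFILE" ssh "sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
  # 3. the step that actually fails: runc has no state directory on this node
  minikube -p "$PROFILE" ssh "sudo runc list -f json"
  # expected here: level=error msg="open /run/runc: no such file or directory"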
TestAddons/parallel/Registry (14.63s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 3.243233ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-x6s72" [0fc44edb-9f6c-414d-a733-43015903fde8] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.003263909s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-kx44z" [98e78cf2-d459-4d88-8617-05ab22523a89] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003082868s
addons_test.go:392: (dbg) Run:  kubectl --context addons-715644 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-715644 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-715644 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.152636043s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-715644 ip
2025/11/24 13:16:14 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-715644 addons disable registry --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-715644 addons disable registry --alsologtostderr -v=1: exit status 11 (260.783756ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1124 13:16:14.622479  363665 out.go:360] Setting OutFile to fd 1 ...
	I1124 13:16:14.622711  363665 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:16:14.622720  363665 out.go:374] Setting ErrFile to fd 2...
	I1124 13:16:14.622725  363665 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:16:14.622951  363665 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-348000/.minikube/bin
	I1124 13:16:14.623214  363665 mustload.go:66] Loading cluster: addons-715644
	I1124 13:16:14.623525  363665 config.go:182] Loaded profile config "addons-715644": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 13:16:14.623540  363665 addons.go:622] checking whether the cluster is paused
	I1124 13:16:14.623702  363665 config.go:182] Loaded profile config "addons-715644": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 13:16:14.623720  363665 host.go:66] Checking if "addons-715644" exists ...
	I1124 13:16:14.624216  363665 cli_runner.go:164] Run: docker container inspect addons-715644 --format={{.State.Status}}
	I1124 13:16:14.642053  363665 ssh_runner.go:195] Run: systemctl --version
	I1124 13:16:14.642129  363665 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-715644
	I1124 13:16:14.658973  363665 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/addons-715644/id_rsa Username:docker}
	I1124 13:16:14.761297  363665 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 13:16:14.761384  363665 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 13:16:14.791983  363665 cri.go:89] found id: "32b77a8342024730ccea78bac96aafa904a8452871f7ad6ede2d73201b5297ae"
	I1124 13:16:14.792009  363665 cri.go:89] found id: "9946073c4dbc00e94343f77ffd10424e77a179847087d6518167e5967c01b6ac"
	I1124 13:16:14.792015  363665 cri.go:89] found id: "97f3de9ff4a387cd014af0563e9b3b98a067b536fde246c22a5e118e0732e718"
	I1124 13:16:14.792020  363665 cri.go:89] found id: "3492cc92692158b5e6044d1ea2c4d57bac8bda63d2ed59eab5592914657894e2"
	I1124 13:16:14.792032  363665 cri.go:89] found id: "db3d376ec41b7312ca6691a8743e9d9a78aa3d7d989fa62b5d4bed28d1352645"
	I1124 13:16:14.792038  363665 cri.go:89] found id: "a9fad48eebd55f9988e66d76b4c1ba8045a48e49c7d2ec247434bda584f848bd"
	I1124 13:16:14.792042  363665 cri.go:89] found id: "7f9d1b3fe4a9063805681f97d4f25ffb0176dbe2fc494b57f8b0ba808d906ab6"
	I1124 13:16:14.792047  363665 cri.go:89] found id: "de0cc746d3ed0efaaab995d7c9060f139d2048fdf2dc530c24968b27a493b199"
	I1124 13:16:14.792052  363665 cri.go:89] found id: "93fbc223db37d476f76feddf4b1b953455b15bfd5655c2e0f21618dcd9149be0"
	I1124 13:16:14.792063  363665 cri.go:89] found id: "9be932139aaef6d1a0813197ebd25803631d854bd08cc570ceb08ebb61e42533"
	I1124 13:16:14.792071  363665 cri.go:89] found id: "e91ca551d1e0e1b67d5a38e6388b9a94476991f0536bc399a4fc40157634ce1f"
	I1124 13:16:14.792076  363665 cri.go:89] found id: "edf878679786a9abe21c9897fa78bbc59bd532ce6f4ce69457f2e17deb93802a"
	I1124 13:16:14.792080  363665 cri.go:89] found id: "17cd086a0a85475fa6e37dbc6d551664d7ac78bb7fdc3540fb1bd1e175d77793"
	I1124 13:16:14.792085  363665 cri.go:89] found id: "33bdbf096e506d847514d785957b6ff08d7be79c8c2ce3cad269fc769d56f682"
	I1124 13:16:14.792092  363665 cri.go:89] found id: "bef94f1c94dd311ef47360262b10fc75702b47761e4bf690355c88cd5acbf47d"
	I1124 13:16:14.792105  363665 cri.go:89] found id: "83f5e4de5d19483eba28cce6cc0496cbd37a7f45e5dd8fdd549b5d2a0fe93004"
	I1124 13:16:14.792113  363665 cri.go:89] found id: "9fc9fbc51a1d5d85e698682518d6aabdc2c3030302e75bcb87adb6ae7d4fac0e"
	I1124 13:16:14.792118  363665 cri.go:89] found id: "80ca7185520801a449353432d8a29471e92f942c8e6b30f587a794abac0fb7dd"
	I1124 13:16:14.792121  363665 cri.go:89] found id: "3c0239d349ace6e30dffd2560683ba8f02197dfb6eb490d1097a535ae3d5599f"
	I1124 13:16:14.792124  363665 cri.go:89] found id: "1cd2d69a4521db2c270e5a2192b5d29f185e8986efeacc56186cd5c8a32fba30"
	I1124 13:16:14.792127  363665 cri.go:89] found id: "f906d790e557cecfdacb1936cb0ed8443cc0bc9466c826f9d800db6bf44bf47e"
	I1124 13:16:14.792129  363665 cri.go:89] found id: "8bd061f25cd271e0f1c7d640c968152672462e55b0bd0013dd192360bd8041bf"
	I1124 13:16:14.792132  363665 cri.go:89] found id: "73d6f909ae2dca0d1fb7c89dd3fa82bdb9b4d2d1c56e66703aa1b07a967e3cc6"
	I1124 13:16:14.792135  363665 cri.go:89] found id: "e080d87ce42a145608e63f7c6b4c14b99b3014112ba7d536610206377da1bcb5"
	I1124 13:16:14.792137  363665 cri.go:89] found id: ""
	I1124 13:16:14.792172  363665 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 13:16:14.807730  363665 out.go:203] 
	W1124 13:16:14.808990  363665 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T13:16:14Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T13:16:14Z" level=error msg="open /run/runc: no such file or directory"
	
	W1124 13:16:14.809020  363665 out.go:285] * 
	* 
	W1124 13:16:14.812983  363665 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1124 13:16:14.814376  363665 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry addon: args "out/minikube-linux-amd64 -p addons-715644 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (14.63s)
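
The registry itself was healthy here: the in-cluster wget and the host-side GET to 192.168.49.2:5000 both succeeded, and only the shared MK_ADDON_DISABLE_PAUSED / runc exit above failed the test. A hedged re-check of the registry endpoint from the host, using the node IP and port printed above (/v2/ is the standard registry API ping and is assumed to be exposed the same way):

  # root URL used by the DEBUG GET in the log above; prints the HTTP status code
  curl -sS -o /dev/null -w '%{http_code}\n' http://192.168.49.2:5000
  # standard Docker registry API ping
  curl -sS http://192.168.49.2:5000/v2/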
TestAddons/parallel/RegistryCreds (0.41s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 3.234847ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-715644
addons_test.go:332: (dbg) Run:  kubectl --context addons-715644 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-715644 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-715644 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (246.303835ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1124 13:16:14.805495  363731 out.go:360] Setting OutFile to fd 1 ...
	I1124 13:16:14.805838  363731 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:16:14.805850  363731 out.go:374] Setting ErrFile to fd 2...
	I1124 13:16:14.805855  363731 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:16:14.806215  363731 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-348000/.minikube/bin
	I1124 13:16:14.806621  363731 mustload.go:66] Loading cluster: addons-715644
	I1124 13:16:14.807730  363731 config.go:182] Loaded profile config "addons-715644": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 13:16:14.807784  363731 addons.go:622] checking whether the cluster is paused
	I1124 13:16:14.808136  363731 config.go:182] Loaded profile config "addons-715644": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 13:16:14.808161  363731 host.go:66] Checking if "addons-715644" exists ...
	I1124 13:16:14.808582  363731 cli_runner.go:164] Run: docker container inspect addons-715644 --format={{.State.Status}}
	I1124 13:16:14.827621  363731 ssh_runner.go:195] Run: systemctl --version
	I1124 13:16:14.827689  363731 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-715644
	I1124 13:16:14.844075  363731 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/addons-715644/id_rsa Username:docker}
	I1124 13:16:14.942842  363731 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 13:16:14.942946  363731 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 13:16:14.970316  363731 cri.go:89] found id: "32b77a8342024730ccea78bac96aafa904a8452871f7ad6ede2d73201b5297ae"
	I1124 13:16:14.970337  363731 cri.go:89] found id: "9946073c4dbc00e94343f77ffd10424e77a179847087d6518167e5967c01b6ac"
	I1124 13:16:14.970341  363731 cri.go:89] found id: "97f3de9ff4a387cd014af0563e9b3b98a067b536fde246c22a5e118e0732e718"
	I1124 13:16:14.970345  363731 cri.go:89] found id: "3492cc92692158b5e6044d1ea2c4d57bac8bda63d2ed59eab5592914657894e2"
	I1124 13:16:14.970348  363731 cri.go:89] found id: "db3d376ec41b7312ca6691a8743e9d9a78aa3d7d989fa62b5d4bed28d1352645"
	I1124 13:16:14.970362  363731 cri.go:89] found id: "a9fad48eebd55f9988e66d76b4c1ba8045a48e49c7d2ec247434bda584f848bd"
	I1124 13:16:14.970365  363731 cri.go:89] found id: "7f9d1b3fe4a9063805681f97d4f25ffb0176dbe2fc494b57f8b0ba808d906ab6"
	I1124 13:16:14.970367  363731 cri.go:89] found id: "de0cc746d3ed0efaaab995d7c9060f139d2048fdf2dc530c24968b27a493b199"
	I1124 13:16:14.970370  363731 cri.go:89] found id: "93fbc223db37d476f76feddf4b1b953455b15bfd5655c2e0f21618dcd9149be0"
	I1124 13:16:14.970381  363731 cri.go:89] found id: "9be932139aaef6d1a0813197ebd25803631d854bd08cc570ceb08ebb61e42533"
	I1124 13:16:14.970387  363731 cri.go:89] found id: "e91ca551d1e0e1b67d5a38e6388b9a94476991f0536bc399a4fc40157634ce1f"
	I1124 13:16:14.970390  363731 cri.go:89] found id: "edf878679786a9abe21c9897fa78bbc59bd532ce6f4ce69457f2e17deb93802a"
	I1124 13:16:14.970392  363731 cri.go:89] found id: "17cd086a0a85475fa6e37dbc6d551664d7ac78bb7fdc3540fb1bd1e175d77793"
	I1124 13:16:14.970395  363731 cri.go:89] found id: "33bdbf096e506d847514d785957b6ff08d7be79c8c2ce3cad269fc769d56f682"
	I1124 13:16:14.970398  363731 cri.go:89] found id: "bef94f1c94dd311ef47360262b10fc75702b47761e4bf690355c88cd5acbf47d"
	I1124 13:16:14.970403  363731 cri.go:89] found id: "83f5e4de5d19483eba28cce6cc0496cbd37a7f45e5dd8fdd549b5d2a0fe93004"
	I1124 13:16:14.970408  363731 cri.go:89] found id: "9fc9fbc51a1d5d85e698682518d6aabdc2c3030302e75bcb87adb6ae7d4fac0e"
	I1124 13:16:14.970413  363731 cri.go:89] found id: "80ca7185520801a449353432d8a29471e92f942c8e6b30f587a794abac0fb7dd"
	I1124 13:16:14.970416  363731 cri.go:89] found id: "3c0239d349ace6e30dffd2560683ba8f02197dfb6eb490d1097a535ae3d5599f"
	I1124 13:16:14.970418  363731 cri.go:89] found id: "1cd2d69a4521db2c270e5a2192b5d29f185e8986efeacc56186cd5c8a32fba30"
	I1124 13:16:14.970421  363731 cri.go:89] found id: "f906d790e557cecfdacb1936cb0ed8443cc0bc9466c826f9d800db6bf44bf47e"
	I1124 13:16:14.970424  363731 cri.go:89] found id: "8bd061f25cd271e0f1c7d640c968152672462e55b0bd0013dd192360bd8041bf"
	I1124 13:16:14.970426  363731 cri.go:89] found id: "73d6f909ae2dca0d1fb7c89dd3fa82bdb9b4d2d1c56e66703aa1b07a967e3cc6"
	I1124 13:16:14.970429  363731 cri.go:89] found id: "e080d87ce42a145608e63f7c6b4c14b99b3014112ba7d536610206377da1bcb5"
	I1124 13:16:14.970431  363731 cri.go:89] found id: ""
	I1124 13:16:14.970474  363731 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 13:16:14.983447  363731 out.go:203] 
	W1124 13:16:14.984559  363731 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T13:16:14Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T13:16:14Z" level=error msg="open /run/runc: no such file or directory"
	
	W1124 13:16:14.984579  363731 out.go:285] * 
	* 
	W1124 13:16:14.988839  363731 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1124 13:16:14.990030  363731 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry-creds addon: args "out/minikube-linux-amd64 -p addons-715644 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.41s)
TestAddons/parallel/Ingress (148.22s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-715644 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-715644 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-715644 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [f11521cd-9336-42e0-ae95-a8fe32cb9d5b] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [f11521cd-9336-42e0-ae95-a8fe32cb9d5b] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.002753092s
I1124 13:16:22.177531  351593 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-715644 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-715644 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m14.894110648s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-715644 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-715644 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
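
Before the post-mortem dump below: the failing step was the in-node curl, which hung for roughly 2m15s and exited with status 28 (consistent with a curl timeout), even though the nginx pod reported Running. A hedged way to retry the check by hand and look at the controller, assuming the profile is still up and the ingress addon uses the usual ingress-nginx-controller deployment name:

  # same request the test makes (addons_test.go:264), with an explicit 10s cap
  minikube -p addons-715644 ssh "curl -sS -m 10 http://127.0.0.1/ -H 'Host: nginx.example.com'"
  # if it still times out, inspect the controller from the host
  kubectl --context addons-715644 -n ingress-nginx get pods -o wide
  kubectl --context addons-715644 -n ingress-nginx logs deploy/ingress-nginx-controller --tail=50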
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-715644
helpers_test.go:243: (dbg) docker inspect addons-715644:

-- stdout --
	[
	    {
	        "Id": "5d903f1f5c35e8ffb34bf9574da2741bdfd2ee1aa57d5cebb162725d11b79768",
	        "Created": "2025-11-24T13:14:06.670171194Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 353602,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T13:14:06.70214882Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/5d903f1f5c35e8ffb34bf9574da2741bdfd2ee1aa57d5cebb162725d11b79768/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5d903f1f5c35e8ffb34bf9574da2741bdfd2ee1aa57d5cebb162725d11b79768/hostname",
	        "HostsPath": "/var/lib/docker/containers/5d903f1f5c35e8ffb34bf9574da2741bdfd2ee1aa57d5cebb162725d11b79768/hosts",
	        "LogPath": "/var/lib/docker/containers/5d903f1f5c35e8ffb34bf9574da2741bdfd2ee1aa57d5cebb162725d11b79768/5d903f1f5c35e8ffb34bf9574da2741bdfd2ee1aa57d5cebb162725d11b79768-json.log",
	        "Name": "/addons-715644",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-715644:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-715644",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "5d903f1f5c35e8ffb34bf9574da2741bdfd2ee1aa57d5cebb162725d11b79768",
	                "LowerDir": "/var/lib/docker/overlay2/31f2ea9bb4a41b900c3dcfe0f2b307129501eb87b5f288dac6764aae643d7406-init/diff:/var/lib/docker/overlay2/b17d6205cf290186b389ac7c1255d7274fea54ef27df9ff8755bddd2d25eb638/diff",
	                "MergedDir": "/var/lib/docker/overlay2/31f2ea9bb4a41b900c3dcfe0f2b307129501eb87b5f288dac6764aae643d7406/merged",
	                "UpperDir": "/var/lib/docker/overlay2/31f2ea9bb4a41b900c3dcfe0f2b307129501eb87b5f288dac6764aae643d7406/diff",
	                "WorkDir": "/var/lib/docker/overlay2/31f2ea9bb4a41b900c3dcfe0f2b307129501eb87b5f288dac6764aae643d7406/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-715644",
	                "Source": "/var/lib/docker/volumes/addons-715644/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-715644",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-715644",
	                "name.minikube.sigs.k8s.io": "addons-715644",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "984b63ae3b6998d3d34778038e297e50dea66a43df6a0c148ff497e76e3d0173",
	            "SandboxKey": "/var/run/docker/netns/984b63ae3b69",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33143"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33144"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33147"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33145"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33146"
	                    }
	                ]
	            },
	            "Networks": {
	                "addons-715644": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "812ceb5bd489f198e66065cdde08eb9cb5e60b9e51f6b0e99123e2b983afdcf3",
	                    "EndpointID": "b4c7a61c550f4214a83e09cbe3642a6ccc1a5a993b98e97ce5be544c4a3081a7",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "0a:cf:00:bd:1c:74",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-715644",
	                        "5d903f1f5c35"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-715644 -n addons-715644
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-715644 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-715644 logs -n 25: (1.095591683s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ --download-only -p binary-mirror-545470 --alsologtostderr --binary-mirror http://127.0.0.1:44271 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-545470 │ jenkins │ v1.37.0 │ 24 Nov 25 13:13 UTC │                     │
	│ delete  │ -p binary-mirror-545470                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-545470 │ jenkins │ v1.37.0 │ 24 Nov 25 13:13 UTC │ 24 Nov 25 13:13 UTC │
	│ addons  │ enable dashboard -p addons-715644                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-715644        │ jenkins │ v1.37.0 │ 24 Nov 25 13:13 UTC │                     │
	│ addons  │ disable dashboard -p addons-715644                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-715644        │ jenkins │ v1.37.0 │ 24 Nov 25 13:13 UTC │                     │
	│ start   │ -p addons-715644 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-715644        │ jenkins │ v1.37.0 │ 24 Nov 25 13:13 UTC │ 24 Nov 25 13:15 UTC │
	│ addons  │ addons-715644 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-715644        │ jenkins │ v1.37.0 │ 24 Nov 25 13:15 UTC │                     │
	│ addons  │ addons-715644 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-715644        │ jenkins │ v1.37.0 │ 24 Nov 25 13:16 UTC │                     │
	│ addons  │ enable headlamp -p addons-715644 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-715644        │ jenkins │ v1.37.0 │ 24 Nov 25 13:16 UTC │                     │
	│ addons  │ addons-715644 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-715644        │ jenkins │ v1.37.0 │ 24 Nov 25 13:16 UTC │                     │
	│ addons  │ addons-715644 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-715644        │ jenkins │ v1.37.0 │ 24 Nov 25 13:16 UTC │                     │
	│ ssh     │ addons-715644 ssh cat /opt/local-path-provisioner/pvc-b07547c9-8acb-4c52-b115-c56befc42fff_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-715644        │ jenkins │ v1.37.0 │ 24 Nov 25 13:16 UTC │ 24 Nov 25 13:16 UTC │
	│ addons  │ addons-715644 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-715644        │ jenkins │ v1.37.0 │ 24 Nov 25 13:16 UTC │                     │
	│ addons  │ addons-715644 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-715644        │ jenkins │ v1.37.0 │ 24 Nov 25 13:16 UTC │                     │
	│ addons  │ addons-715644 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-715644        │ jenkins │ v1.37.0 │ 24 Nov 25 13:16 UTC │                     │
	│ ip      │ addons-715644 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-715644        │ jenkins │ v1.37.0 │ 24 Nov 25 13:16 UTC │ 24 Nov 25 13:16 UTC │
	│ addons  │ addons-715644 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-715644        │ jenkins │ v1.37.0 │ 24 Nov 25 13:16 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-715644                                                                                                                                                                                                                                                                                                                                                                                           │ addons-715644        │ jenkins │ v1.37.0 │ 24 Nov 25 13:16 UTC │ 24 Nov 25 13:16 UTC │
	│ addons  │ addons-715644 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-715644        │ jenkins │ v1.37.0 │ 24 Nov 25 13:16 UTC │                     │
	│ addons  │ addons-715644 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-715644        │ jenkins │ v1.37.0 │ 24 Nov 25 13:16 UTC │                     │
	│ addons  │ addons-715644 addons disable amd-gpu-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-715644        │ jenkins │ v1.37.0 │ 24 Nov 25 13:16 UTC │                     │
	│ ssh     │ addons-715644 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-715644        │ jenkins │ v1.37.0 │ 24 Nov 25 13:16 UTC │                     │
	│ addons  │ addons-715644 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-715644        │ jenkins │ v1.37.0 │ 24 Nov 25 13:16 UTC │                     │
	│ addons  │ addons-715644 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-715644        │ jenkins │ v1.37.0 │ 24 Nov 25 13:16 UTC │                     │
	│ addons  │ addons-715644 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-715644        │ jenkins │ v1.37.0 │ 24 Nov 25 13:16 UTC │                     │
	│ ip      │ addons-715644 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-715644        │ jenkins │ v1.37.0 │ 24 Nov 25 13:18 UTC │ 24 Nov 25 13:18 UTC │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 13:13:43
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 13:13:43.853072  352949 out.go:360] Setting OutFile to fd 1 ...
	I1124 13:13:43.853306  352949 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:13:43.853316  352949 out.go:374] Setting ErrFile to fd 2...
	I1124 13:13:43.853322  352949 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:13:43.853842  352949 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-348000/.minikube/bin
	I1124 13:13:43.854698  352949 out.go:368] Setting JSON to false
	I1124 13:13:43.855597  352949 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6971,"bootTime":1763983053,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 13:13:43.855681  352949 start.go:143] virtualization: kvm guest
	I1124 13:13:43.857229  352949 out.go:179] * [addons-715644] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1124 13:13:43.858597  352949 out.go:179]   - MINIKUBE_LOCATION=21932
	I1124 13:13:43.858605  352949 notify.go:221] Checking for updates...
	I1124 13:13:43.861197  352949 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 13:13:43.862263  352949 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21932-348000/kubeconfig
	I1124 13:13:43.863302  352949 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21932-348000/.minikube
	I1124 13:13:43.864321  352949 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1124 13:13:43.865302  352949 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 13:13:43.866468  352949 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 13:13:43.887843  352949 docker.go:124] docker version: linux-29.0.3:Docker Engine - Community
	I1124 13:13:43.887993  352949 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 13:13:43.942603  352949 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:50 SystemTime:2025-11-24 13:13:43.933195472 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 13:13:43.942742  352949 docker.go:319] overlay module found
	I1124 13:13:43.944231  352949 out.go:179] * Using the docker driver based on user configuration
	I1124 13:13:43.945222  352949 start.go:309] selected driver: docker
	I1124 13:13:43.945233  352949 start.go:927] validating driver "docker" against <nil>
	I1124 13:13:43.945243  352949 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 13:13:43.945764  352949 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 13:13:44.003907  352949 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:50 SystemTime:2025-11-24 13:13:43.99409835 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 13:13:44.004143  352949 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1124 13:13:44.004421  352949 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 13:13:44.005883  352949 out.go:179] * Using Docker driver with root privileges
	I1124 13:13:44.006940  352949 cni.go:84] Creating CNI manager for ""
	I1124 13:13:44.007029  352949 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 13:13:44.007046  352949 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1124 13:13:44.007122  352949 start.go:353] cluster config:
	{Name:addons-715644 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-715644 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s}
	I1124 13:13:44.008281  352949 out.go:179] * Starting "addons-715644" primary control-plane node in "addons-715644" cluster
	I1124 13:13:44.009229  352949 cache.go:134] Beginning downloading kic base image for docker with crio
	I1124 13:13:44.010175  352949 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1124 13:13:44.011119  352949 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 13:13:44.011145  352949 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21932-348000/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1124 13:13:44.011151  352949 cache.go:65] Caching tarball of preloaded images
	I1124 13:13:44.011207  352949 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1124 13:13:44.011227  352949 preload.go:238] Found /home/jenkins/minikube-integration/21932-348000/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1124 13:13:44.011236  352949 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1124 13:13:44.011562  352949 profile.go:143] Saving config to /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/addons-715644/config.json ...
	I1124 13:13:44.011595  352949 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/addons-715644/config.json: {Name:mkb5f591b550421bc01d9518e6a72a508d786dc3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
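The cluster config dumped a few lines above is persisted as JSON at the profile path shown here (profiles/addons-715644/config.json). A small sketch of reading a handful of those fields back is below; the field names are taken from the struct dump in this log, but the trimmed struct and the hard-coded path are illustrative assumptions, not minikube's real types.

	// read_profile_sketch.go — illustrative only; the struct is trimmed to a few
	// fields visible in the config dump above and is not minikube's type.
	package main

	import (
		"encoding/json"
		"fmt"
		"os"
	)

	type profileConfig struct {
		Name             string
		Driver           string
		KubernetesConfig struct {
			KubernetesVersion string
			ContainerRuntime  string
		}
	}

	func main() {
		// Path as reported in this run; it will differ on any other host.
		path := "/home/jenkins/minikube-integration/21932-348000/.minikube/profiles/addons-715644/config.json"
		data, err := os.ReadFile(path)
		if err != nil {
			panic(err)
		}
		var cfg profileConfig
		if err := json.Unmarshal(data, &cfg); err != nil {
			panic(err)
		}
		fmt.Println(cfg.Name, cfg.Driver, cfg.KubernetesConfig.KubernetesVersion, cfg.KubernetesConfig.ContainerRuntime)
	}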
	I1124 13:13:44.026290  352949 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f to local cache
	I1124 13:13:44.026392  352949 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local cache directory
	I1124 13:13:44.026407  352949 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local cache directory, skipping pull
	I1124 13:13:44.026412  352949 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in cache, skipping pull
	I1124 13:13:44.026421  352949 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f as a tarball
	I1124 13:13:44.026425  352949 cache.go:172] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f from local cache
	I1124 13:13:56.067733  352949 cache.go:174] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f from cached tarball
	I1124 13:13:56.067772  352949 cache.go:240] Successfully downloaded all kic artifacts
	I1124 13:13:56.067841  352949 start.go:360] acquireMachinesLock for addons-715644: {Name:mk09735476b717614bfd96b379af3529b0f6a051 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 13:13:56.067960  352949 start.go:364] duration metric: took 97.197µs to acquireMachinesLock for "addons-715644"
	I1124 13:13:56.067990  352949 start.go:93] Provisioning new machine with config: &{Name:addons-715644 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-715644 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 13:13:56.068069  352949 start.go:125] createHost starting for "" (driver="docker")
	I1124 13:13:56.069691  352949 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1124 13:13:56.069956  352949 start.go:159] libmachine.API.Create for "addons-715644" (driver="docker")
	I1124 13:13:56.069991  352949 client.go:173] LocalClient.Create starting
	I1124 13:13:56.070091  352949 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21932-348000/.minikube/certs/ca.pem
	I1124 13:13:56.184274  352949 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21932-348000/.minikube/certs/cert.pem
	I1124 13:13:56.218806  352949 cli_runner.go:164] Run: docker network inspect addons-715644 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1124 13:13:56.234342  352949 cli_runner.go:211] docker network inspect addons-715644 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1124 13:13:56.234406  352949 network_create.go:284] running [docker network inspect addons-715644] to gather additional debugging logs...
	I1124 13:13:56.234431  352949 cli_runner.go:164] Run: docker network inspect addons-715644
	W1124 13:13:56.249128  352949 cli_runner.go:211] docker network inspect addons-715644 returned with exit code 1
	I1124 13:13:56.249153  352949 network_create.go:287] error running [docker network inspect addons-715644]: docker network inspect addons-715644: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-715644 not found
	I1124 13:13:56.249170  352949 network_create.go:289] output of [docker network inspect addons-715644]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-715644 not found
	
	** /stderr **
	I1124 13:13:56.249241  352949 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 13:13:56.265199  352949 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ba49d0}
	I1124 13:13:56.265252  352949 network_create.go:124] attempt to create docker network addons-715644 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1124 13:13:56.265303  352949 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-715644 addons-715644
	I1124 13:13:56.309737  352949 network_create.go:108] docker network addons-715644 192.168.49.0/24 created
	I1124 13:13:56.309766  352949 kic.go:121] calculated static IP "192.168.49.2" for the "addons-715644" container
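The two lines above show the address arithmetic for the cluster network: minikube picked the free private subnet 192.168.49.0/24, created the bridge with gateway 192.168.49.1, and reserved 192.168.49.2 as the node container's static IP. A minimal Go sketch of that derivation follows; it only illustrates the arithmetic and is not minikube's network_create implementation.

	// subnet_sketch.go — illustration of the gateway/first-client derivation
	// reported above for 192.168.49.0/24; not minikube's actual code.
	package main

	import (
		"fmt"
		"net/netip"
	)

	func main() {
		prefix := netip.MustParsePrefix("192.168.49.0/24")
		base := prefix.Addr()         // 192.168.49.0 (network address)
		gateway := base.Next()        // 192.168.49.1 — bridge gateway
		firstClient := gateway.Next() // 192.168.49.2 — node container IP
		fmt.Println(prefix, gateway, firstClient)
	}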
	I1124 13:13:56.309831  352949 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1124 13:13:56.324350  352949 cli_runner.go:164] Run: docker volume create addons-715644 --label name.minikube.sigs.k8s.io=addons-715644 --label created_by.minikube.sigs.k8s.io=true
	I1124 13:13:56.340577  352949 oci.go:103] Successfully created a docker volume addons-715644
	I1124 13:13:56.340637  352949 cli_runner.go:164] Run: docker run --rm --name addons-715644-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-715644 --entrypoint /usr/bin/test -v addons-715644:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1124 13:14:02.404337  352949 cli_runner.go:217] Completed: docker run --rm --name addons-715644-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-715644 --entrypoint /usr/bin/test -v addons-715644:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib: (6.063644909s)
	I1124 13:14:02.404375  352949 oci.go:107] Successfully prepared a docker volume addons-715644
	I1124 13:14:02.404424  352949 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 13:14:02.404438  352949 kic.go:194] Starting extracting preloaded images to volume ...
	I1124 13:14:02.404499  352949 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21932-348000/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-715644:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
	I1124 13:14:06.599593  352949 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21932-348000/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-715644:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (4.195023164s)
	I1124 13:14:06.599628  352949 kic.go:203] duration metric: took 4.195185794s to extract preloaded images to volume ...
	W1124 13:14:06.599703  352949 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1124 13:14:06.599746  352949 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1124 13:14:06.599791  352949 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1124 13:14:06.655585  352949 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-715644 --name addons-715644 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-715644 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-715644 --network addons-715644 --ip 192.168.49.2 --volume addons-715644:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1124 13:14:06.924593  352949 cli_runner.go:164] Run: docker container inspect addons-715644 --format={{.State.Running}}
	I1124 13:14:06.943231  352949 cli_runner.go:164] Run: docker container inspect addons-715644 --format={{.State.Status}}
	I1124 13:14:06.959799  352949 cli_runner.go:164] Run: docker exec addons-715644 stat /var/lib/dpkg/alternatives/iptables
	I1124 13:14:07.000871  352949 oci.go:144] the created container "addons-715644" has a running status.
	I1124 13:14:07.000924  352949 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21932-348000/.minikube/machines/addons-715644/id_rsa...
	I1124 13:14:07.057848  352949 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21932-348000/.minikube/machines/addons-715644/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1124 13:14:07.085706  352949 cli_runner.go:164] Run: docker container inspect addons-715644 --format={{.State.Status}}
	I1124 13:14:07.103909  352949 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1124 13:14:07.103940  352949 kic_runner.go:114] Args: [docker exec --privileged addons-715644 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1124 13:14:07.153586  352949 cli_runner.go:164] Run: docker container inspect addons-715644 --format={{.State.Status}}
	I1124 13:14:07.173279  352949 machine.go:94] provisionDockerMachine start ...
	I1124 13:14:07.173416  352949 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-715644
	I1124 13:14:07.192844  352949 main.go:143] libmachine: Using SSH client type: native
	I1124 13:14:07.193239  352949 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33143 <nil> <nil>}
	I1124 13:14:07.193263  352949 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 13:14:07.194053  352949 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:38264->127.0.0.1:33143: read: connection reset by peer
	I1124 13:14:10.336212  352949 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-715644
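Provisioning starts before the container's sshd is ready, so the first attempt at 13:14:07 fails and the hostname command only succeeds on a later attempt at 13:14:10. A rough sketch of that wait-until-reachable loop is below, using the host/port from this run; it only illustrates the retry pattern and is not libmachine's SSH client (which retries the full SSH handshake, not just the TCP dial).

	// ssh_wait_sketch.go — illustrates the retry-until-reachable behaviour seen
	// above; not libmachine's implementation. 127.0.0.1:33143 is the host port
	// mapped to the container's sshd in this particular run.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		addr := "127.0.0.1:33143"
		deadline := time.Now().Add(60 * time.Second)
		for {
			conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
			if err == nil {
				conn.Close()
				fmt.Println("sshd reachable at", addr)
				return
			}
			if time.Now().After(deadline) {
				fmt.Println("gave up waiting for", addr, ":", err)
				return
			}
			time.Sleep(500 * time.Millisecond)
		}
	}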
	
	I1124 13:14:10.336244  352949 ubuntu.go:182] provisioning hostname "addons-715644"
	I1124 13:14:10.336314  352949 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-715644
	I1124 13:14:10.353089  352949 main.go:143] libmachine: Using SSH client type: native
	I1124 13:14:10.353359  352949 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33143 <nil> <nil>}
	I1124 13:14:10.353373  352949 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-715644 && echo "addons-715644" | sudo tee /etc/hostname
	I1124 13:14:10.500427  352949 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-715644
	
	I1124 13:14:10.500501  352949 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-715644
	I1124 13:14:10.517607  352949 main.go:143] libmachine: Using SSH client type: native
	I1124 13:14:10.517819  352949 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33143 <nil> <nil>}
	I1124 13:14:10.517836  352949 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-715644' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-715644/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-715644' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 13:14:10.656744  352949 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 13:14:10.656777  352949 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21932-348000/.minikube CaCertPath:/home/jenkins/minikube-integration/21932-348000/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21932-348000/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21932-348000/.minikube}
	I1124 13:14:10.656809  352949 ubuntu.go:190] setting up certificates
	I1124 13:14:10.656831  352949 provision.go:84] configureAuth start
	I1124 13:14:10.656904  352949 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-715644
	I1124 13:14:10.673100  352949 provision.go:143] copyHostCerts
	I1124 13:14:10.673159  352949 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-348000/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21932-348000/.minikube/key.pem (1675 bytes)
	I1124 13:14:10.673267  352949 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-348000/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21932-348000/.minikube/ca.pem (1078 bytes)
	I1124 13:14:10.673326  352949 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-348000/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21932-348000/.minikube/cert.pem (1123 bytes)
	I1124 13:14:10.673372  352949 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21932-348000/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21932-348000/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21932-348000/.minikube/certs/ca-key.pem org=jenkins.addons-715644 san=[127.0.0.1 192.168.49.2 addons-715644 localhost minikube]
	I1124 13:14:10.709635  352949 provision.go:177] copyRemoteCerts
	I1124 13:14:10.709683  352949 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 13:14:10.709724  352949 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-715644
	I1124 13:14:10.725459  352949 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/addons-715644/id_rsa Username:docker}
	I1124 13:14:10.824258  352949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1124 13:14:10.842806  352949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1124 13:14:10.859233  352949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1124 13:14:10.875249  352949 provision.go:87] duration metric: took 218.402556ms to configureAuth
	I1124 13:14:10.875272  352949 ubuntu.go:206] setting minikube options for container-runtime
	I1124 13:14:10.875430  352949 config.go:182] Loaded profile config "addons-715644": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 13:14:10.875549  352949 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-715644
	I1124 13:14:10.892045  352949 main.go:143] libmachine: Using SSH client type: native
	I1124 13:14:10.892246  352949 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33143 <nil> <nil>}
	I1124 13:14:10.892261  352949 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1124 13:14:11.168289  352949 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1124 13:14:11.168315  352949 machine.go:97] duration metric: took 3.995006612s to provisionDockerMachine
	I1124 13:14:11.168329  352949 client.go:176] duration metric: took 15.098328866s to LocalClient.Create
	I1124 13:14:11.168352  352949 start.go:167] duration metric: took 15.098397897s to libmachine.API.Create "addons-715644"
	I1124 13:14:11.168361  352949 start.go:293] postStartSetup for "addons-715644" (driver="docker")
	I1124 13:14:11.168369  352949 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 13:14:11.168439  352949 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 13:14:11.168485  352949 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-715644
	I1124 13:14:11.184779  352949 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/addons-715644/id_rsa Username:docker}
	I1124 13:14:11.286101  352949 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 13:14:11.289416  352949 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 13:14:11.289449  352949 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 13:14:11.289460  352949 filesync.go:126] Scanning /home/jenkins/minikube-integration/21932-348000/.minikube/addons for local assets ...
	I1124 13:14:11.289509  352949 filesync.go:126] Scanning /home/jenkins/minikube-integration/21932-348000/.minikube/files for local assets ...
	I1124 13:14:11.289531  352949 start.go:296] duration metric: took 121.165506ms for postStartSetup
	I1124 13:14:11.289809  352949 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-715644
	I1124 13:14:11.306043  352949 profile.go:143] Saving config to /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/addons-715644/config.json ...
	I1124 13:14:11.306258  352949 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 13:14:11.306304  352949 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-715644
	I1124 13:14:11.322081  352949 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/addons-715644/id_rsa Username:docker}
	I1124 13:14:11.418314  352949 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 13:14:11.422551  352949 start.go:128] duration metric: took 15.354467204s to createHost
	I1124 13:14:11.422572  352949 start.go:83] releasing machines lock for "addons-715644", held for 15.354597577s
	I1124 13:14:11.422620  352949 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-715644
	I1124 13:14:11.438530  352949 ssh_runner.go:195] Run: cat /version.json
	I1124 13:14:11.438575  352949 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-715644
	I1124 13:14:11.438650  352949 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 13:14:11.438734  352949 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-715644
	I1124 13:14:11.455653  352949 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/addons-715644/id_rsa Username:docker}
	I1124 13:14:11.456204  352949 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/addons-715644/id_rsa Username:docker}
	I1124 13:14:11.603820  352949 ssh_runner.go:195] Run: systemctl --version
	I1124 13:14:11.609866  352949 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1124 13:14:11.642004  352949 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 13:14:11.646231  352949 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 13:14:11.646283  352949 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 13:14:11.670256  352949 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1124 13:14:11.670279  352949 start.go:496] detecting cgroup driver to use...
	I1124 13:14:11.670305  352949 detect.go:190] detected "systemd" cgroup driver on host os
	I1124 13:14:11.670341  352949 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1124 13:14:11.684860  352949 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1124 13:14:11.695829  352949 docker.go:218] disabling cri-docker service (if available) ...
	I1124 13:14:11.695876  352949 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 13:14:11.710493  352949 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 13:14:11.727039  352949 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 13:14:11.805432  352949 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 13:14:11.889185  352949 docker.go:234] disabling docker service ...
	I1124 13:14:11.889238  352949 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 13:14:11.906513  352949 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 13:14:11.917760  352949 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 13:14:11.997162  352949 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 13:14:12.073235  352949 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 13:14:12.084183  352949 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 13:14:12.096903  352949 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1124 13:14:12.096975  352949 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 13:14:12.106365  352949 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1124 13:14:12.106417  352949 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 13:14:12.114230  352949 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 13:14:12.122280  352949 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 13:14:12.130141  352949 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 13:14:12.137320  352949 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 13:14:12.145035  352949 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 13:14:12.157014  352949 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 13:14:12.164775  352949 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 13:14:12.171484  352949 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 13:14:12.178421  352949 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 13:14:12.251114  352949 ssh_runner.go:195] Run: sudo systemctl restart crio
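The sed commands above edit /etc/crio/crio.conf.d/02-crio.conf in place before the daemon-reload and crio restart. Pieced together from those commands (an inference from the log, not a copy of the file on the node), the drop-in should end up containing keys roughly like:

	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]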
	I1124 13:14:12.375877  352949 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1124 13:14:12.375973  352949 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1124 13:14:12.379708  352949 start.go:564] Will wait 60s for crictl version
	I1124 13:14:12.379772  352949 ssh_runner.go:195] Run: which crictl
	I1124 13:14:12.383255  352949 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 13:14:12.406179  352949 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1124 13:14:12.406268  352949 ssh_runner.go:195] Run: crio --version
	I1124 13:14:12.432345  352949 ssh_runner.go:195] Run: crio --version
	I1124 13:14:12.460361  352949 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1124 13:14:12.461448  352949 cli_runner.go:164] Run: docker network inspect addons-715644 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 13:14:12.477332  352949 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1124 13:14:12.481115  352949 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 13:14:12.490727  352949 kubeadm.go:884] updating cluster {Name:addons-715644 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-715644 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketV
MnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 13:14:12.490858  352949 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 13:14:12.490940  352949 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 13:14:12.522061  352949 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 13:14:12.522080  352949 crio.go:433] Images already preloaded, skipping extraction
	I1124 13:14:12.522116  352949 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 13:14:12.547861  352949 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 13:14:12.547879  352949 cache_images.go:86] Images are preloaded, skipping loading
	I1124 13:14:12.547900  352949 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1124 13:14:12.548021  352949 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-715644 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-715644 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1124 13:14:12.548089  352949 ssh_runner.go:195] Run: crio config
	I1124 13:14:12.591119  352949 cni.go:84] Creating CNI manager for ""
	I1124 13:14:12.591139  352949 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 13:14:12.591157  352949 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 13:14:12.591179  352949 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-715644 NodeName:addons-715644 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernet
es/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 13:14:12.591309  352949 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-715644"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1124 13:14:12.591366  352949 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1124 13:14:12.599058  352949 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 13:14:12.599116  352949 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 13:14:12.606317  352949 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1124 13:14:12.617777  352949 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 13:14:12.631701  352949 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1124 13:14:12.642972  352949 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1124 13:14:12.646219  352949 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 13:14:12.655094  352949 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 13:14:12.731814  352949 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 13:14:12.752757  352949 certs.go:69] Setting up /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/addons-715644 for IP: 192.168.49.2
	I1124 13:14:12.752776  352949 certs.go:195] generating shared ca certs ...
	I1124 13:14:12.752793  352949 certs.go:227] acquiring lock for ca certs: {Name:mk929c5478505d0d4647158f3ccc02830de7b582 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:14:12.752917  352949 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21932-348000/.minikube/ca.key
	I1124 13:14:12.840736  352949 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21932-348000/.minikube/ca.crt ...
	I1124 13:14:12.840761  352949 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-348000/.minikube/ca.crt: {Name:mkce1262ae281136b1dd62caba3163658cacaba9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:14:12.840918  352949 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21932-348000/.minikube/ca.key ...
	I1124 13:14:12.840930  352949 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-348000/.minikube/ca.key: {Name:mk393d7fd776167e6c04ca0ef96f76563f922aba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:14:12.841003  352949 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21932-348000/.minikube/proxy-client-ca.key
	I1124 13:14:12.907222  352949 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21932-348000/.minikube/proxy-client-ca.crt ...
	I1124 13:14:12.907244  352949 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-348000/.minikube/proxy-client-ca.crt: {Name:mkd68573e62099628351083591bcfc80d3c6f763 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:14:12.907366  352949 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21932-348000/.minikube/proxy-client-ca.key ...
	I1124 13:14:12.907376  352949 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-348000/.minikube/proxy-client-ca.key: {Name:mk59b10792858e63a60454559819b9d0f6fa8b4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:14:12.907436  352949 certs.go:257] generating profile certs ...
	I1124 13:14:12.907491  352949 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/addons-715644/client.key
	I1124 13:14:12.907504  352949 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/addons-715644/client.crt with IP's: []
	I1124 13:14:13.031714  352949 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/addons-715644/client.crt ...
	I1124 13:14:13.031732  352949 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/addons-715644/client.crt: {Name:mke3aa47a8cb6947e96555de743329f99a5d82b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:14:13.031851  352949 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/addons-715644/client.key ...
	I1124 13:14:13.031861  352949 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/addons-715644/client.key: {Name:mk4f4b2ac4523e3659e5b2daaf0afaa4bb4ea022 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:14:13.031952  352949 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/addons-715644/apiserver.key.ec50c8b1
	I1124 13:14:13.031976  352949 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/addons-715644/apiserver.crt.ec50c8b1 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1124 13:14:13.100156  352949 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/addons-715644/apiserver.crt.ec50c8b1 ...
	I1124 13:14:13.100173  352949 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/addons-715644/apiserver.crt.ec50c8b1: {Name:mke28e301648f310171720622b136d1bceea46ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:14:13.100269  352949 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/addons-715644/apiserver.key.ec50c8b1 ...
	I1124 13:14:13.100280  352949 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/addons-715644/apiserver.key.ec50c8b1: {Name:mk3fd3ee9011a1c205f4f9f94cbbf968defc546b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:14:13.100339  352949 certs.go:382] copying /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/addons-715644/apiserver.crt.ec50c8b1 -> /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/addons-715644/apiserver.crt
	I1124 13:14:13.100406  352949 certs.go:386] copying /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/addons-715644/apiserver.key.ec50c8b1 -> /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/addons-715644/apiserver.key
	I1124 13:14:13.100452  352949 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/addons-715644/proxy-client.key
	I1124 13:14:13.100467  352949 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/addons-715644/proxy-client.crt with IP's: []
	I1124 13:14:13.378126  352949 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/addons-715644/proxy-client.crt ...
	I1124 13:14:13.378152  352949 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/addons-715644/proxy-client.crt: {Name:mkd4ff4f81b50eccf5d3bea5af6baa43a518412b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:14:13.378291  352949 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/addons-715644/proxy-client.key ...
	I1124 13:14:13.378304  352949 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/addons-715644/proxy-client.key: {Name:mkd9fc84243a34c10c818d9d1ec38eff074241d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
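
The crypto.go lines above generate the shared minikubeCA and proxyClientCA pairs and write them out as .crt/.key files. The essence is a self-signed CA built with the standard library; this sketch is not minikube's actual certs.go/crypto.go implementation, and the three-year validity and RSA-2048 key size are assumptions:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"os"
	"time"
)

func main() {
	key, _ := rsa.GenerateKey(rand.Reader, 2048)
	tmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour), // assumed validity
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
		IsCA:                  true,
	}
	// Self-signed: the template is both subject and issuer.
	der, _ := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	_ = os.WriteFile("ca.crt", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0644)
	_ = os.WriteFile("ca.key", pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)}), 0600)
}
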
	I1124 13:14:13.378467  352949 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-348000/.minikube/certs/ca-key.pem (1675 bytes)
	I1124 13:14:13.378504  352949 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-348000/.minikube/certs/ca.pem (1078 bytes)
	I1124 13:14:13.378529  352949 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-348000/.minikube/certs/cert.pem (1123 bytes)
	I1124 13:14:13.378554  352949 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-348000/.minikube/certs/key.pem (1675 bytes)
	I1124 13:14:13.379176  352949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 13:14:13.396704  352949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 13:14:13.412924  352949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 13:14:13.429328  352949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1124 13:14:13.445181  352949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/addons-715644/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1124 13:14:13.461199  352949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/addons-715644/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1124 13:14:13.477680  352949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/addons-715644/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 13:14:13.495549  352949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/addons-715644/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1124 13:14:13.512412  352949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 13:14:13.530370  352949 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 13:14:13.541613  352949 ssh_runner.go:195] Run: openssl version
	I1124 13:14:13.547188  352949 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 13:14:13.556985  352949 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 13:14:13.560456  352949 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 13:14 /usr/share/ca-certificates/minikubeCA.pem
	I1124 13:14:13.560504  352949 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 13:14:13.593379  352949 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
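
The two commands above publish minikubeCA into the node's OpenSSL trust store: `openssl x509 -hash -noout` prints the subject hash (b5213941 here) and the certificate is linked as /etc/ssl/certs/<hash>.0. A small Go sketch that shells out the same way; error handling is minimal and the paths are the ones from the log:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	pemPath := "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
	link := "/etc/ssl/certs/" + hash + ".0"
	// Mirrors: test -L <link> || ln -fs /etc/ssl/certs/minikubeCA.pem <link>
	if _, err := os.Lstat(link); os.IsNotExist(err) {
		_ = os.Symlink("/etc/ssl/certs/minikubeCA.pem", link)
	}
	fmt.Println("trust link:", link)
}
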
	I1124 13:14:13.601003  352949 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 13:14:13.604173  352949 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1124 13:14:13.604226  352949 kubeadm.go:401] StartCluster: {Name:addons-715644 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-715644 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 13:14:13.604311  352949 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 13:14:13.604357  352949 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 13:14:13.629087  352949 cri.go:89] found id: ""
	I1124 13:14:13.629146  352949 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 13:14:13.636291  352949 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1124 13:14:13.643383  352949 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1124 13:14:13.643440  352949 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 13:14:13.650505  352949 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1124 13:14:13.650529  352949 kubeadm.go:158] found existing configuration files:
	
	I1124 13:14:13.650559  352949 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1124 13:14:13.657445  352949 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1124 13:14:13.657486  352949 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1124 13:14:13.664263  352949 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1124 13:14:13.671039  352949 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1124 13:14:13.671077  352949 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 13:14:13.677668  352949 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1124 13:14:13.684768  352949 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1124 13:14:13.684802  352949 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 13:14:13.691347  352949 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1124 13:14:13.698071  352949 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1124 13:14:13.698107  352949 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
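
The four grep/rm pairs above amount to one loop: each kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8443, and is otherwise removed so kubeadm can regenerate it. A condensed sketch of that check (not the kubeadm.go code itself):

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Mirrors the log: grep fails (missing file or wrong endpoint) -> rm -f
			_ = os.Remove(f)
			fmt.Println("removed (or absent):", f)
			continue
		}
		fmt.Println("kept:", f)
	}
}
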
	I1124 13:14:13.704799  352949 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1124 13:14:13.739672  352949 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1124 13:14:13.739726  352949 kubeadm.go:319] [preflight] Running pre-flight checks
	I1124 13:14:13.758545  352949 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1124 13:14:13.758625  352949 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1124 13:14:13.758689  352949 kubeadm.go:319] OS: Linux
	I1124 13:14:13.758762  352949 kubeadm.go:319] CGROUPS_CPU: enabled
	I1124 13:14:13.758827  352949 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1124 13:14:13.758918  352949 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1124 13:14:13.758990  352949 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1124 13:14:13.759062  352949 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1124 13:14:13.759134  352949 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1124 13:14:13.759201  352949 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1124 13:14:13.759281  352949 kubeadm.go:319] CGROUPS_IO: enabled
	I1124 13:14:13.811301  352949 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1124 13:14:13.811447  352949 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1124 13:14:13.811598  352949 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1124 13:14:13.817956  352949 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1124 13:14:13.820250  352949 out.go:252]   - Generating certificates and keys ...
	I1124 13:14:13.820337  352949 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1124 13:14:13.820419  352949 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1124 13:14:13.979821  352949 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1124 13:14:14.103757  352949 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1124 13:14:14.268116  352949 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1124 13:14:14.693704  352949 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1124 13:14:15.218681  352949 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1124 13:14:15.218830  352949 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-715644 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1124 13:14:15.306494  352949 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1124 13:14:15.306610  352949 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-715644 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1124 13:14:15.519049  352949 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1124 13:14:16.050571  352949 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1124 13:14:16.508260  352949 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1124 13:14:16.508713  352949 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1124 13:14:17.130476  352949 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1124 13:14:17.300980  352949 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1124 13:14:17.531181  352949 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1124 13:14:18.094485  352949 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1124 13:14:18.137664  352949 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1124 13:14:18.138146  352949 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1124 13:14:18.141613  352949 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1124 13:14:18.143338  352949 out.go:252]   - Booting up control plane ...
	I1124 13:14:18.143422  352949 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1124 13:14:18.143732  352949 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1124 13:14:18.144495  352949 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1124 13:14:18.157250  352949 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1124 13:14:18.157407  352949 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1124 13:14:18.163305  352949 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1124 13:14:18.163523  352949 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1124 13:14:18.163575  352949 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1124 13:14:18.259686  352949 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1124 13:14:18.259843  352949 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1124 13:14:19.760348  352949 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.500781462s
	I1124 13:14:19.763186  352949 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1124 13:14:19.763304  352949 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1124 13:14:19.763424  352949 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1124 13:14:19.763546  352949 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1124 13:14:21.299378  352949 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.533577331s
	I1124 13:14:21.618017  352949 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.854821937s
	I1124 13:14:23.264809  352949 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.501617195s
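
kubeadm's control-plane-check waits on the three component endpoints shown above (kube-apiserver /livez, kube-controller-manager /healthz, kube-scheduler /livez) until each returns 200. A hedged polling sketch against the same URLs; the 2s per-request timeout and 500ms retry interval are arbitrary, and TLS verification is skipped purely to keep the example short:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitHealthy polls url until it answers 200 OK or the timeout elapses.
func waitHealthy(url string, timeout time.Duration) bool {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if resp, err := client.Get(url); err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return true
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return false
}

func main() {
	for _, u := range []string{
		"https://192.168.49.2:8443/livez",
		"https://127.0.0.1:10257/healthz",
		"https://127.0.0.1:10259/livez",
	} {
		fmt.Println(u, "healthy:", waitHealthy(u, 4*time.Minute))
	}
}
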
	I1124 13:14:23.275838  352949 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1124 13:14:23.283407  352949 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1124 13:14:23.290793  352949 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1124 13:14:23.291102  352949 kubeadm.go:319] [mark-control-plane] Marking the node addons-715644 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1124 13:14:23.297555  352949 kubeadm.go:319] [bootstrap-token] Using token: za5myl.fnbisrs7rdfrxqnj
	I1124 13:14:23.299509  352949 out.go:252]   - Configuring RBAC rules ...
	I1124 13:14:23.299671  352949 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1124 13:14:23.301657  352949 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1124 13:14:23.306403  352949 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1124 13:14:23.308580  352949 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1124 13:14:23.310682  352949 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1124 13:14:23.313044  352949 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1124 13:14:23.671684  352949 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1124 13:14:24.083215  352949 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1124 13:14:24.670783  352949 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1124 13:14:24.671685  352949 kubeadm.go:319] 
	I1124 13:14:24.671793  352949 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1124 13:14:24.671811  352949 kubeadm.go:319] 
	I1124 13:14:24.671977  352949 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1124 13:14:24.671988  352949 kubeadm.go:319] 
	I1124 13:14:24.672022  352949 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1124 13:14:24.672126  352949 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1124 13:14:24.672217  352949 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1124 13:14:24.672235  352949 kubeadm.go:319] 
	I1124 13:14:24.672320  352949 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1124 13:14:24.672329  352949 kubeadm.go:319] 
	I1124 13:14:24.672370  352949 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1124 13:14:24.672388  352949 kubeadm.go:319] 
	I1124 13:14:24.672475  352949 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1124 13:14:24.672577  352949 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1124 13:14:24.672683  352949 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1124 13:14:24.672692  352949 kubeadm.go:319] 
	I1124 13:14:24.672798  352949 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1124 13:14:24.672926  352949 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1124 13:14:24.672944  352949 kubeadm.go:319] 
	I1124 13:14:24.673072  352949 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token za5myl.fnbisrs7rdfrxqnj \
	I1124 13:14:24.673230  352949 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:8508f5e374ce1614712f271f50423a392652f73206d8a868cc7aac45c80e4a0c \
	I1124 13:14:24.673266  352949 kubeadm.go:319] 	--control-plane 
	I1124 13:14:24.673277  352949 kubeadm.go:319] 
	I1124 13:14:24.673379  352949 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1124 13:14:24.673388  352949 kubeadm.go:319] 
	I1124 13:14:24.673487  352949 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token za5myl.fnbisrs7rdfrxqnj \
	I1124 13:14:24.673619  352949 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:8508f5e374ce1614712f271f50423a392652f73206d8a868cc7aac45c80e4a0c 
	I1124 13:14:24.675412  352949 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1124 13:14:24.675554  352949 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1124 13:14:24.675580  352949 cni.go:84] Creating CNI manager for ""
	I1124 13:14:24.675587  352949 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 13:14:24.676941  352949 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1124 13:14:24.677984  352949 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1124 13:14:24.682435  352949 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1124 13:14:24.682451  352949 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1124 13:14:24.695115  352949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1124 13:14:24.878275  352949 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1124 13:14:24.878369  352949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:14:24.878383  352949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-715644 minikube.k8s.io/updated_at=2025_11_24T13_14_24_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=b5d1c9f4e75f4e638a533695fd62619949cefcab minikube.k8s.io/name=addons-715644 minikube.k8s.io/primary=true
	I1124 13:14:24.958992  352949 ops.go:34] apiserver oom_adj: -16
	I1124 13:14:24.959132  352949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:14:25.459220  352949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:14:25.959464  352949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:14:26.459273  352949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:14:26.959370  352949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:14:27.459432  352949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:14:27.959234  352949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:14:28.459212  352949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:14:28.960002  352949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:14:29.460125  352949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:14:29.959518  352949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:14:30.021632  352949 kubeadm.go:1114] duration metric: took 5.143325124s to wait for elevateKubeSystemPrivileges
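
The burst of `kubectl get sa default` calls at roughly 500ms intervals is the elevateKubeSystemPrivileges wait: retry until the default ServiceAccount exists, then proceed. A sketch of the same retry pattern using a plain kubectl invocation; waitForDefaultSA is an illustrative helper, not a minikube function:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultSA polls `kubectl get sa default` until it succeeds or the
// timeout elapses, mirroring the ~500ms cadence visible in the log timestamps.
func waitForDefaultSA(kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("kubectl", "--kubeconfig", kubeconfig, "get", "sa", "default")
		if err := cmd.Run(); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account not ready after %s", timeout)
}

func main() {
	if err := waitForDefaultSA("/var/lib/minikube/kubeconfig", time.Minute); err != nil {
		fmt.Println(err)
	}
}
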
	I1124 13:14:30.021680  352949 kubeadm.go:403] duration metric: took 16.417449219s to StartCluster
	I1124 13:14:30.021723  352949 settings.go:142] acquiring lock: {Name:mk72c17792ecaf5f4aecae499df19a0043a48eea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:14:30.021843  352949 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21932-348000/kubeconfig
	I1124 13:14:30.022474  352949 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-348000/kubeconfig: {Name:mk6bbc2300c711b206dd5e2ef6fd04da250c6338 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:14:30.023364  352949 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1124 13:14:30.023390  352949 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 13:14:30.023470  352949 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1124 13:14:30.023625  352949 config.go:182] Loaded profile config "addons-715644": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 13:14:30.023639  352949 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-715644"
	I1124 13:14:30.023645  352949 addons.go:70] Setting yakd=true in profile "addons-715644"
	I1124 13:14:30.023666  352949 addons.go:239] Setting addon yakd=true in "addons-715644"
	I1124 13:14:30.023675  352949 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-715644"
	I1124 13:14:30.023689  352949 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-715644"
	I1124 13:14:30.023690  352949 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-715644"
	I1124 13:14:30.023707  352949 host.go:66] Checking if "addons-715644" exists ...
	I1124 13:14:30.023715  352949 host.go:66] Checking if "addons-715644" exists ...
	I1124 13:14:30.023720  352949 host.go:66] Checking if "addons-715644" exists ...
	I1124 13:14:30.023752  352949 addons.go:70] Setting ingress=true in profile "addons-715644"
	I1124 13:14:30.023781  352949 addons.go:239] Setting addon ingress=true in "addons-715644"
	I1124 13:14:30.023814  352949 host.go:66] Checking if "addons-715644" exists ...
	I1124 13:14:30.023844  352949 addons.go:70] Setting storage-provisioner=true in profile "addons-715644"
	I1124 13:14:30.023869  352949 addons.go:239] Setting addon storage-provisioner=true in "addons-715644"
	I1124 13:14:30.023914  352949 host.go:66] Checking if "addons-715644" exists ...
	I1124 13:14:30.024035  352949 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-715644"
	I1124 13:14:30.024087  352949 addons.go:70] Setting volumesnapshots=true in profile "addons-715644"
	I1124 13:14:30.024124  352949 addons.go:239] Setting addon volumesnapshots=true in "addons-715644"
	I1124 13:14:30.024133  352949 addons.go:70] Setting registry=true in profile "addons-715644"
	I1124 13:14:30.024159  352949 addons.go:239] Setting addon registry=true in "addons-715644"
	I1124 13:14:30.024175  352949 host.go:66] Checking if "addons-715644" exists ...
	I1124 13:14:30.024177  352949 addons.go:70] Setting default-storageclass=true in profile "addons-715644"
	I1124 13:14:30.024234  352949 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-715644"
	I1124 13:14:30.024303  352949 cli_runner.go:164] Run: docker container inspect addons-715644 --format={{.State.Status}}
	I1124 13:14:30.024315  352949 addons.go:70] Setting gcp-auth=true in profile "addons-715644"
	I1124 13:14:30.024370  352949 mustload.go:66] Loading cluster: addons-715644
	I1124 13:14:30.024406  352949 cli_runner.go:164] Run: docker container inspect addons-715644 --format={{.State.Status}}
	I1124 13:14:30.024530  352949 config.go:182] Loaded profile config "addons-715644": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 13:14:30.024603  352949 cli_runner.go:164] Run: docker container inspect addons-715644 --format={{.State.Status}}
	I1124 13:14:30.024748  352949 cli_runner.go:164] Run: docker container inspect addons-715644 --format={{.State.Status}}
	I1124 13:14:30.024802  352949 cli_runner.go:164] Run: docker container inspect addons-715644 --format={{.State.Status}}
	I1124 13:14:30.024880  352949 host.go:66] Checking if "addons-715644" exists ...
	I1124 13:14:30.024126  352949 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-715644"
	I1124 13:14:30.025341  352949 host.go:66] Checking if "addons-715644" exists ...
	I1124 13:14:30.026481  352949 out.go:179] * Verifying Kubernetes components...
	I1124 13:14:30.026579  352949 cli_runner.go:164] Run: docker container inspect addons-715644 --format={{.State.Status}}
	I1124 13:14:30.024303  352949 cli_runner.go:164] Run: docker container inspect addons-715644 --format={{.State.Status}}
	I1124 13:14:30.024028  352949 addons.go:70] Setting volcano=true in profile "addons-715644"
	I1124 13:14:30.027198  352949 addons.go:239] Setting addon volcano=true in "addons-715644"
	I1124 13:14:30.027249  352949 host.go:66] Checking if "addons-715644" exists ...
	I1124 13:14:30.024958  352949 cli_runner.go:164] Run: docker container inspect addons-715644 --format={{.State.Status}}
	I1124 13:14:30.024060  352949 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-715644"
	I1124 13:14:30.027558  352949 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-715644"
	I1124 13:14:30.028045  352949 cli_runner.go:164] Run: docker container inspect addons-715644 --format={{.State.Status}}
	I1124 13:14:30.028317  352949 cli_runner.go:164] Run: docker container inspect addons-715644 --format={{.State.Status}}
	I1124 13:14:30.029557  352949 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 13:14:30.024072  352949 addons.go:70] Setting cloud-spanner=true in profile "addons-715644"
	I1124 13:14:30.031595  352949 addons.go:239] Setting addon cloud-spanner=true in "addons-715644"
	I1124 13:14:30.031641  352949 host.go:66] Checking if "addons-715644" exists ...
	I1124 13:14:30.032192  352949 cli_runner.go:164] Run: docker container inspect addons-715644 --format={{.State.Status}}
	I1124 13:14:30.025054  352949 addons.go:70] Setting inspektor-gadget=true in profile "addons-715644"
	I1124 13:14:30.033206  352949 addons.go:239] Setting addon inspektor-gadget=true in "addons-715644"
	I1124 13:14:30.033233  352949 host.go:66] Checking if "addons-715644" exists ...
	I1124 13:14:30.033707  352949 cli_runner.go:164] Run: docker container inspect addons-715644 --format={{.State.Status}}
	I1124 13:14:30.025111  352949 addons.go:70] Setting registry-creds=true in profile "addons-715644"
	I1124 13:14:30.033942  352949 addons.go:239] Setting addon registry-creds=true in "addons-715644"
	I1124 13:14:30.033984  352949 host.go:66] Checking if "addons-715644" exists ...
	I1124 13:14:30.034440  352949 cli_runner.go:164] Run: docker container inspect addons-715644 --format={{.State.Status}}
	I1124 13:14:30.025125  352949 addons.go:70] Setting metrics-server=true in profile "addons-715644"
	I1124 13:14:30.037043  352949 addons.go:239] Setting addon metrics-server=true in "addons-715644"
	I1124 13:14:30.037089  352949 host.go:66] Checking if "addons-715644" exists ...
	I1124 13:14:30.025148  352949 addons.go:70] Setting ingress-dns=true in profile "addons-715644"
	I1124 13:14:30.037518  352949 addons.go:239] Setting addon ingress-dns=true in "addons-715644"
	I1124 13:14:30.037559  352949 cli_runner.go:164] Run: docker container inspect addons-715644 --format={{.State.Status}}
	I1124 13:14:30.025192  352949 cli_runner.go:164] Run: docker container inspect addons-715644 --format={{.State.Status}}
	I1124 13:14:30.037830  352949 host.go:66] Checking if "addons-715644" exists ...
	I1124 13:14:30.039688  352949 cli_runner.go:164] Run: docker container inspect addons-715644 --format={{.State.Status}}
	I1124 13:14:30.040389  352949 cli_runner.go:164] Run: docker container inspect addons-715644 --format={{.State.Status}}
	I1124 13:14:30.067623  352949 host.go:66] Checking if "addons-715644" exists ...
	I1124 13:14:30.078785  352949 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1124 13:14:30.080040  352949 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1124 13:14:30.080104  352949 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1124 13:14:30.080550  352949 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-715644
	I1124 13:14:30.082597  352949 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1124 13:14:30.085253  352949 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1124 13:14:30.085308  352949 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1124 13:14:30.085971  352949 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-715644
	I1124 13:14:30.086149  352949 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1124 13:14:30.087156  352949 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1124 13:14:30.087177  352949 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1124 13:14:30.087222  352949 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-715644
	I1124 13:14:30.091527  352949 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1124 13:14:30.091881  352949 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1124 13:14:30.092547  352949 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1124 13:14:30.092949  352949 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1124 13:14:30.093004  352949 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-715644
	I1124 13:14:30.093766  352949 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1124 13:14:30.093780  352949 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1124 13:14:30.093831  352949 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-715644
	I1124 13:14:30.094031  352949 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1124 13:14:30.096357  352949 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1124 13:14:30.098556  352949 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1124 13:14:30.098705  352949 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.0
	I1124 13:14:30.099707  352949 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1124 13:14:30.100711  352949 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1124 13:14:30.101769  352949 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1124 13:14:30.101884  352949 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1124 13:14:30.102938  352949 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1124 13:14:30.103141  352949 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1124 13:14:30.103155  352949 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1124 13:14:30.103210  352949 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-715644
	I1124 13:14:30.106050  352949 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1124 13:14:30.106095  352949 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 13:14:30.106373  352949 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1124 13:14:30.107526  352949 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1124 13:14:30.107593  352949 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-715644"
	I1124 13:14:30.113397  352949 host.go:66] Checking if "addons-715644" exists ...
	I1124 13:14:30.112998  352949 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 13:14:30.114422  352949 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 13:14:30.114477  352949 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-715644
	I1124 13:14:30.113045  352949 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1124 13:14:30.114719  352949 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1124 13:14:30.114762  352949 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-715644
	I1124 13:14:30.115798  352949 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1124 13:14:30.115835  352949 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1124 13:14:30.115867  352949 cli_runner.go:164] Run: docker container inspect addons-715644 --format={{.State.Status}}
	I1124 13:14:30.116008  352949 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-715644
	I1124 13:14:30.116819  352949 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1124 13:14:30.117937  352949 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1124 13:14:30.118070  352949 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1124 13:14:30.118084  352949 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1124 13:14:30.118148  352949 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-715644
	I1124 13:14:30.118929  352949 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1124 13:14:30.118944  352949 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1124 13:14:30.118984  352949 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-715644
	W1124 13:14:30.127090  352949 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1124 13:14:30.151569  352949 addons.go:239] Setting addon default-storageclass=true in "addons-715644"
	I1124 13:14:30.151653  352949 host.go:66] Checking if "addons-715644" exists ...
	I1124 13:14:30.151870  352949 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1124 13:14:30.151902  352949 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.46.0
	I1124 13:14:30.152533  352949 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/addons-715644/id_rsa Username:docker}
	I1124 13:14:30.154032  352949 cli_runner.go:164] Run: docker container inspect addons-715644 --format={{.State.Status}}
	I1124 13:14:30.155362  352949 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/addons-715644/id_rsa Username:docker}
	I1124 13:14:30.156778  352949 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1124 13:14:30.156797  352949 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1124 13:14:30.156847  352949 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-715644
	I1124 13:14:30.157294  352949 out.go:179]   - Using image docker.io/registry:3.0.0
	I1124 13:14:30.158648  352949 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1124 13:14:30.158710  352949 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1124 13:14:30.159392  352949 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-715644
	I1124 13:14:30.165469  352949 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/addons-715644/id_rsa Username:docker}
	I1124 13:14:30.169558  352949 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/addons-715644/id_rsa Username:docker}
	I1124 13:14:30.169652  352949 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/addons-715644/id_rsa Username:docker}
	I1124 13:14:30.175749  352949 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/addons-715644/id_rsa Username:docker}
	I1124 13:14:30.176158  352949 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/addons-715644/id_rsa Username:docker}
	I1124 13:14:30.181050  352949 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/addons-715644/id_rsa Username:docker}
	I1124 13:14:30.201739  352949 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/addons-715644/id_rsa Username:docker}
	I1124 13:14:30.202987  352949 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/addons-715644/id_rsa Username:docker}
	I1124 13:14:30.203302  352949 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1124 13:14:30.204507  352949 out.go:179]   - Using image docker.io/busybox:stable
	I1124 13:14:30.208796  352949 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1124 13:14:30.208816  352949 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1124 13:14:30.208880  352949 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-715644
	I1124 13:14:30.209837  352949 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/addons-715644/id_rsa Username:docker}
	I1124 13:14:30.211980  352949 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/addons-715644/id_rsa Username:docker}
	W1124 13:14:30.214809  352949 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1124 13:14:30.214842  352949 retry.go:31] will retry after 363.076161ms: ssh: handshake failed: EOF
	I1124 13:14:30.215877  352949 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 13:14:30.216031  352949 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 13:14:30.216202  352949 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-715644
	I1124 13:14:30.217107  352949 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/addons-715644/id_rsa Username:docker}
	I1124 13:14:30.220743  352949 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1124 13:14:30.239873  352949 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/addons-715644/id_rsa Username:docker}
	I1124 13:14:30.253780  352949 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/addons-715644/id_rsa Username:docker}
	W1124 13:14:30.254883  352949 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1124 13:14:30.254957  352949 retry.go:31] will retry after 215.076815ms: ssh: handshake failed: EOF
	I1124 13:14:30.270352  352949 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 13:14:30.328341  352949 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1124 13:14:30.334848  352949 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1124 13:14:30.367436  352949 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1124 13:14:30.378381  352949 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1124 13:14:30.378405  352949 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1124 13:14:30.381479  352949 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1124 13:14:30.381546  352949 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1124 13:14:30.381634  352949 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1124 13:14:30.384512  352949 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1124 13:14:30.390266  352949 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1124 13:14:30.390331  352949 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1124 13:14:30.391642  352949 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1124 13:14:30.392143  352949 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1124 13:14:30.395534  352949 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1124 13:14:30.395547  352949 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1124 13:14:30.404643  352949 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1124 13:14:30.414806  352949 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 13:14:30.417018  352949 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1124 13:14:30.417036  352949 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1124 13:14:30.435872  352949 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1124 13:14:30.435907  352949 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1124 13:14:30.438437  352949 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1124 13:14:30.438493  352949 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1124 13:14:30.447863  352949 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1124 13:14:30.447881  352949 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1124 13:14:30.464406  352949 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1124 13:14:30.464475  352949 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1124 13:14:30.502244  352949 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1124 13:14:30.502291  352949 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1124 13:14:30.511373  352949 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1124 13:14:30.511400  352949 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1124 13:14:30.514197  352949 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1124 13:14:30.514215  352949 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1124 13:14:30.518993  352949 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1124 13:14:30.519052  352949 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1124 13:14:30.556351  352949 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1124 13:14:30.559526  352949 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1124 13:14:30.559554  352949 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1124 13:14:30.561247  352949 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1124 13:14:30.576813  352949 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1124 13:14:30.576836  352949 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1124 13:14:30.584062  352949 node_ready.go:35] waiting up to 6m0s for node "addons-715644" to be "Ready" ...
	I1124 13:14:30.584312  352949 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
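The node_ready waiter above polls the node's Ready condition for up to six minutes, logging "will retry" until it flips to True. A minimal client-go sketch of that pattern, assuming a local kubeconfig and a fixed poll interval (neither of which is necessarily how minikube's implementation does it):

	package main
	
	import (
		"context"
		"fmt"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	// waitNodeReady polls the node until its Ready condition is True or the timeout expires.
	func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err == nil {
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						return nil
					}
				}
			}
			time.Sleep(2 * time.Second) // retry, as the "will retry" log lines below do
		}
		return fmt.Errorf("node %q not Ready within %s", name, timeout)
	}
	
	func main() {
		// Assumed kubeconfig location; inside the node minikube points kubectl at /var/lib/minikube/kubeconfig.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		if err := waitNodeReady(context.Background(), cs, "addons-715644", 6*time.Minute); err != nil {
			panic(err)
		}
		fmt.Println("node is Ready")
	}
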
	I1124 13:14:30.596177  352949 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1124 13:14:30.596203  352949 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1124 13:14:30.642379  352949 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1124 13:14:30.646646  352949 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1124 13:14:30.646674  352949 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1124 13:14:30.700202  352949 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 13:14:30.703390  352949 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1124 13:14:30.703411  352949 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1124 13:14:30.757934  352949 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1124 13:14:30.757964  352949 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1124 13:14:30.790334  352949 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1124 13:14:30.790360  352949 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1124 13:14:30.813297  352949 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1124 13:14:30.813325  352949 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1124 13:14:30.826630  352949 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1124 13:14:30.826712  352949 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1124 13:14:30.858706  352949 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1124 13:14:30.858832  352949 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1124 13:14:30.882285  352949 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1124 13:14:30.882315  352949 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1124 13:14:30.920395  352949 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1124 13:14:30.943062  352949 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1124 13:14:31.093803  352949 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-715644" context rescaled to 1 replicas
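The CoreDNS deployment is scaled down to a single replica at this point. One way to perform that rescale with client-go's scale subresource, shown only as a hedged sketch (names copied from the log line; this is not necessarily how kapi.go does it):

	package addons
	
	import (
		"context"
	
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)
	
	// scaleCoreDNS sets the kube-system/coredns deployment to the given replica count
	// through the scale subresource, mirroring the "rescaled to 1 replicas" step above.
	func scaleCoreDNS(ctx context.Context, cs *kubernetes.Clientset, replicas int32) error {
		scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
		if err != nil {
			return err
		}
		scale.Spec.Replicas = replicas
		_, err = cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{})
		return err
	}
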
	I1124 13:14:31.521729  352949 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.140134457s)
	I1124 13:14:31.521771  352949 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (1.137227397s)
	I1124 13:14:31.521781  352949 addons.go:495] Verifying addon ingress=true in "addons-715644"
	I1124 13:14:31.521823  352949 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (1.130126519s)
	I1124 13:14:31.521835  352949 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.129675424s)
	I1124 13:14:31.521934  352949 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.117258952s)
	I1124 13:14:31.522028  352949 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.107196596s)
	I1124 13:14:31.522180  352949 addons.go:495] Verifying addon metrics-server=true in "addons-715644"
	I1124 13:14:31.523502  352949 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-715644 service yakd-dashboard -n yakd-dashboard
	
	I1124 13:14:31.523512  352949 out.go:179] * Verifying ingress addon...
	I1124 13:14:31.525324  352949 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1124 13:14:31.527543  352949 kapi.go:86] Found 0 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1124 13:14:31.955201  352949 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.312770844s)
	W1124 13:14:31.955252  352949 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1124 13:14:31.955281  352949 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.255043243s)
	I1124 13:14:31.955281  352949 retry.go:31] will retry after 351.390124ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
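The first snapshot apply fails because the VolumeSnapshotClass is submitted in the same batch as the CRDs that define it; until those CRDs are Established, the API server has no mapping for the kind, hence "ensure CRDs are installed first". The addon manager handles this by retrying (and, below, re-applying with --force). A hedged sketch of the two-phase alternative, applying the CRDs first, waiting for them to become Established, and only then applying the custom resources (file names copied from the log; kubectl is assumed to be on PATH):

	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	// kubectl runs a kubectl command and echoes its combined output.
	func kubectl(args ...string) error {
		out, err := exec.Command("kubectl", args...).CombinedOutput()
		fmt.Print(string(out))
		return err
	}
	
	func main() {
		// Phase 1: install the snapshot CRDs on their own.
		crds := []string{
			"/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml",
			"/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml",
			"/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml",
		}
		for _, f := range crds {
			if err := kubectl("apply", "-f", f); err != nil {
				panic(err)
			}
		}
		// Phase 2: wait until the CRDs are Established so their kinds can be resolved.
		if err := kubectl("wait", "--for=condition=established", "--timeout=60s",
			"crd/volumesnapshotclasses.snapshot.storage.k8s.io",
			"crd/volumesnapshotcontents.snapshot.storage.k8s.io",
			"crd/volumesnapshots.snapshot.storage.k8s.io"); err != nil {
			panic(err)
		}
		// Phase 3: the VolumeSnapshotClass and snapshot-controller manifests now apply cleanly.
		if err := kubectl("apply",
			"-f", "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml",
			"-f", "/etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml",
			"-f", "/etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml"); err != nil {
			panic(err)
		}
	}
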
	I1124 13:14:31.955520  352949 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.035079932s)
	I1124 13:14:31.955554  352949 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-715644"
	I1124 13:14:31.955576  352949 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.012359968s)
	I1124 13:14:31.955604  352949 addons.go:495] Verifying addon registry=true in "addons-715644"
	I1124 13:14:31.957089  352949 out.go:179] * Verifying registry addon...
	I1124 13:14:31.957089  352949 out.go:179] * Verifying csi-hostpath-driver addon...
	I1124 13:14:31.959430  352949 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1124 13:14:31.959430  352949 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1124 13:14:31.964965  352949 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1124 13:14:31.964980  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:14:31.966246  352949 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1124 13:14:31.966262  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:14:32.065791  352949 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1124 13:14:32.065811  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
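The kapi.go waiters above repeatedly list pods by label selector and report their phase until they leave Pending. A minimal client-go sketch of that polling loop (namespace and selector copied from the registry waiter; the one-second interval is an assumption):

	package waiters
	
	import (
		"context"
		"fmt"
		"time"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)
	
	// waitPodsRunning polls pods matching the selector until every one reports phase Running.
	func waitPodsRunning(ctx context.Context, cs *kubernetes.Clientset, ns, selector string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err == nil && len(pods.Items) > 0 {
				running := true
				for _, p := range pods.Items {
					if p.Status.Phase != corev1.PodRunning {
						running = false // still Pending (or another phase), keep waiting
					}
				}
				if running {
					return nil
				}
			}
			time.Sleep(time.Second)
		}
		return fmt.Errorf("pods %q in %q not Running within %s", selector, ns, timeout)
	}
	
	// Example: waitPodsRunning(ctx, cs, "kube-system", "kubernetes.io/minikube-addons=registry", 6*time.Minute)
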
	I1124 13:14:32.307678  352949 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1124 13:14:32.463086  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:14:32.463086  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:14:32.563335  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1124 13:14:32.586258  352949 node_ready.go:57] node "addons-715644" has "Ready":"False" status (will retry)
	I1124 13:14:32.961791  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:14:32.961845  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:14:33.028051  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:14:33.462507  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:14:33.462545  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:14:33.562981  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:14:33.962452  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:14:33.962583  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:14:34.027851  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:14:34.463179  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:14:34.463324  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:14:34.563795  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1124 13:14:34.586764  352949 node_ready.go:57] node "addons-715644" has "Ready":"False" status (will retry)
	I1124 13:14:34.766384  352949 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.458659892s)
	I1124 13:14:34.962220  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:14:34.962299  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:14:35.028241  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:14:35.462723  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:14:35.462733  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:14:35.528036  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:14:35.962468  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:14:35.962562  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:14:36.027766  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:14:36.462128  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:14:36.462235  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:14:36.562518  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:14:36.962056  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:14:36.962113  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:14:37.027979  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1124 13:14:37.086469  352949 node_ready.go:57] node "addons-715644" has "Ready":"False" status (will retry)
	I1124 13:14:37.462568  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:14:37.462589  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:14:37.563281  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:14:37.686121  352949 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1124 13:14:37.686198  352949 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-715644
	I1124 13:14:37.702762  352949 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/addons-715644/id_rsa Username:docker}
	I1124 13:14:37.807388  352949 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1124 13:14:37.818879  352949 addons.go:239] Setting addon gcp-auth=true in "addons-715644"
	I1124 13:14:37.818945  352949 host.go:66] Checking if "addons-715644" exists ...
	I1124 13:14:37.819275  352949 cli_runner.go:164] Run: docker container inspect addons-715644 --format={{.State.Status}}
	I1124 13:14:37.835233  352949 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1124 13:14:37.835284  352949 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-715644
	I1124 13:14:37.851315  352949 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/addons-715644/id_rsa Username:docker}
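To reach the node for the gcp-auth files, the driver looks up the host port that Docker published for the container's sshd (22/tcp) using the inspect template shown above, then opens a new SSH client against 127.0.0.1 on that port. A sketch of the port-lookup half (container name taken from the log; the docker CLI is assumed to be on PATH):

	package main
	
	import (
		"os/exec"
		"strings"
	)
	
	// hostSSHPort returns the host port Docker published for the container's 22/tcp,
	// using the same Go template the log shows being passed to `docker container inspect -f`.
	func hostSSHPort(container string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect", "-f",
			`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`, container).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}
	
	// Example: port, _ := hostSSHPort("addons-715644")  // "33143" in this run
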
	I1124 13:14:37.948367  352949 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1124 13:14:37.949532  352949 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1124 13:14:37.950561  352949 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1124 13:14:37.950577  352949 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1124 13:14:37.962550  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:14:37.962687  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:14:37.963502  352949 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1124 13:14:37.963522  352949 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1124 13:14:37.975350  352949 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1124 13:14:37.975364  352949 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1124 13:14:37.986998  352949 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1124 13:14:38.028467  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:14:38.279109  352949 addons.go:495] Verifying addon gcp-auth=true in "addons-715644"
	I1124 13:14:38.280290  352949 out.go:179] * Verifying gcp-auth addon...
	I1124 13:14:38.282042  352949 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1124 13:14:38.285528  352949 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1124 13:14:38.285543  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:14:38.461951  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:14:38.461997  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:14:38.527905  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:14:38.785142  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:14:38.962635  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:14:38.962761  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:14:39.027754  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1124 13:14:39.088429  352949 node_ready.go:57] node "addons-715644" has "Ready":"False" status (will retry)
	I1124 13:14:39.284828  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:14:39.462734  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:14:39.462759  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:14:39.527840  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:14:39.785131  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:14:39.962458  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:14:39.962464  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:14:40.027425  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:14:40.284905  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:14:40.462464  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:14:40.462548  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:14:40.527653  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:14:40.784767  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:14:40.962269  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:14:40.962269  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:14:41.028148  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:14:41.285046  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:14:41.462240  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:14:41.462328  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:14:41.528241  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1124 13:14:41.586248  352949 node_ready.go:57] node "addons-715644" has "Ready":"False" status (will retry)
	I1124 13:14:41.784853  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:14:41.962280  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:14:41.962371  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:14:42.028353  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:14:42.283994  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:14:42.462564  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:14:42.462564  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:14:42.527591  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:14:42.784528  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:14:42.961924  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:14:42.962009  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:14:43.027949  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:14:43.284806  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:14:43.462215  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:14:43.462302  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:14:43.528314  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:14:43.784488  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:14:43.962303  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:14:43.962454  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:14:44.028848  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1124 13:14:44.086648  352949 node_ready.go:57] node "addons-715644" has "Ready":"False" status (will retry)
	I1124 13:14:44.285310  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:14:44.462470  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:14:44.462486  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:14:44.527395  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:14:44.784906  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:14:44.962282  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:14:44.962332  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:14:45.028246  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:14:45.284583  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:14:45.462270  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:14:45.462420  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:14:45.527474  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:14:45.784978  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:14:45.962206  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:14:45.962284  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:14:46.028178  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1124 13:14:46.086880  352949 node_ready.go:57] node "addons-715644" has "Ready":"False" status (will retry)
	I1124 13:14:46.285368  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:14:46.462962  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:14:46.463070  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:14:46.527763  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:14:46.785179  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:14:46.962655  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:14:46.962738  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:14:47.027882  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:14:47.285051  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:14:47.462467  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:14:47.462538  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:14:47.528687  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:14:47.784922  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:14:47.962443  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:14:47.962515  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:14:48.029172  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:14:48.284063  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:14:48.462306  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:14:48.462315  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:14:48.528301  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1124 13:14:48.585954  352949 node_ready.go:57] node "addons-715644" has "Ready":"False" status (will retry)
	I1124 13:14:48.785333  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:14:48.962921  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:14:48.962989  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:14:49.027844  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:14:49.284860  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:14:49.462401  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:14:49.462554  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:14:49.527359  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:14:49.784730  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:14:49.962092  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:14:49.962119  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:14:50.028259  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:14:50.284541  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:14:50.461922  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:14:50.462047  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:14:50.528034  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1124 13:14:50.586864  352949 node_ready.go:57] node "addons-715644" has "Ready":"False" status (will retry)
	I1124 13:14:50.785229  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:14:50.962711  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:14:50.962882  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:14:51.027687  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:14:51.284664  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:14:51.461848  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:14:51.461901  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:14:51.527763  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:14:51.784977  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:14:51.962283  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:14:51.962436  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:14:52.027518  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:14:52.284799  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:14:52.462369  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:14:52.462434  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:14:52.528516  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:14:52.784630  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:14:52.962096  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:14:52.962222  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:14:53.028238  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1124 13:14:53.085796  352949 node_ready.go:57] node "addons-715644" has "Ready":"False" status (will retry)
	I1124 13:14:53.285305  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:14:53.462678  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:14:53.462701  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:14:53.527686  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:14:53.784634  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:14:53.962029  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:14:53.962045  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:14:54.028121  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:14:54.284580  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:14:54.462283  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:14:54.462404  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:14:54.527396  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:14:54.784671  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:14:54.962553  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:14:54.962623  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:14:55.027781  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1124 13:14:55.086954  352949 node_ready.go:57] node "addons-715644" has "Ready":"False" status (will retry)
	I1124 13:14:55.285172  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:14:55.462634  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:14:55.462708  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:14:55.527991  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:14:55.785320  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:14:55.962624  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:14:55.962735  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:14:56.027898  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:14:56.285244  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:14:56.462490  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:14:56.462592  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:14:56.527524  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:14:56.784580  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:14:56.961821  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:14:56.961941  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:14:57.027502  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:14:57.284362  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:14:57.462451  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:14:57.462533  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:14:57.527641  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1124 13:14:57.586596  352949 node_ready.go:57] node "addons-715644" has "Ready":"False" status (will retry)
	I1124 13:14:57.784916  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:14:57.961990  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:14:57.962129  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:14:58.028083  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:14:58.284938  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:14:58.462187  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:14:58.462263  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:14:58.528119  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:14:58.785135  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:14:58.962544  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:14:58.962767  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:14:59.027752  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:14:59.284746  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:14:59.462223  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:14:59.462234  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:14:59.528169  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1124 13:14:59.587037  352949 node_ready.go:57] node "addons-715644" has "Ready":"False" status (will retry)
	I1124 13:14:59.784407  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:14:59.962816  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:14:59.962904  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:00.027854  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:00.284392  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:00.462754  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:00.462758  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:00.527707  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:00.785040  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:00.962409  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:00.962440  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:01.028454  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:01.284675  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:01.461944  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:01.462024  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:01.528099  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:01.785378  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:01.962444  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:01.962453  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:02.027336  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1124 13:15:02.086392  352949 node_ready.go:57] node "addons-715644" has "Ready":"False" status (will retry)
	I1124 13:15:02.284872  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:02.462223  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:02.462346  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:02.528227  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:02.784811  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:02.962242  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:02.962307  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:03.028401  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:03.284849  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:03.462294  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:03.462391  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:03.527517  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:03.784612  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:03.962037  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:03.962087  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:04.028048  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1124 13:15:04.086775  352949 node_ready.go:57] node "addons-715644" has "Ready":"False" status (will retry)
	I1124 13:15:04.285076  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:04.462575  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:04.462661  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:04.527786  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:04.784817  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:04.962263  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:04.962308  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:05.028237  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:05.285160  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:05.462786  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:05.462968  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:05.528323  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:05.784697  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:05.961824  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:05.962014  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:06.027671  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:06.284786  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:06.462000  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:06.462033  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:06.527907  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1124 13:15:06.586675  352949 node_ready.go:57] node "addons-715644" has "Ready":"False" status (will retry)
	I1124 13:15:06.785118  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:06.962138  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:06.962319  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:07.028228  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:07.284170  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:07.462494  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:07.462600  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:07.527565  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:07.784975  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:07.962472  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:07.962520  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:08.027783  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:08.284926  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:08.462307  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:08.462388  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:08.527395  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:08.784638  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:08.962252  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:08.962274  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:09.028330  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1124 13:15:09.086315  352949 node_ready.go:57] node "addons-715644" has "Ready":"False" status (will retry)
	I1124 13:15:09.284495  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:09.461584  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:09.461679  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:09.527612  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:09.784690  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:09.962182  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:09.962190  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:10.028269  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:10.284651  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:10.461984  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:10.462053  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:10.527857  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:10.785222  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:10.962372  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:10.962468  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:11.029520  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1124 13:15:11.086384  352949 node_ready.go:57] node "addons-715644" has "Ready":"False" status (will retry)
	I1124 13:15:11.285074  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:11.462875  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:11.465090  352949 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1124 13:15:11.465124  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:11.530275  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:11.587174  352949 node_ready.go:49] node "addons-715644" is "Ready"
	I1124 13:15:11.587210  352949 node_ready.go:38] duration metric: took 41.003116201s for node "addons-715644" to be "Ready" ...
	I1124 13:15:11.587232  352949 api_server.go:52] waiting for apiserver process to appear ...
	I1124 13:15:11.587291  352949 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 13:15:11.602456  352949 api_server.go:72] duration metric: took 41.579028892s to wait for apiserver process to appear ...
	I1124 13:15:11.602488  352949 api_server.go:88] waiting for apiserver healthz status ...
	I1124 13:15:11.602510  352949 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1124 13:15:11.607083  352949 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1124 13:15:11.608095  352949 api_server.go:141] control plane version: v1.34.1
	I1124 13:15:11.608124  352949 api_server.go:131] duration metric: took 5.627713ms to wait for apiserver health ...
	I1124 13:15:11.608136  352949 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 13:15:11.611793  352949 system_pods.go:59] 20 kube-system pods found
	I1124 13:15:11.611829  352949 system_pods.go:61] "amd-gpu-device-plugin-hxftx" [0a8e4e82-1ce0-4f98-9dd7-0239163661a3] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1124 13:15:11.611841  352949 system_pods.go:61] "coredns-66bc5c9577-8kqrg" [a8a7a050-acf2-455f-8a19-63ee9e9aee24] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 13:15:11.611853  352949 system_pods.go:61] "csi-hostpath-attacher-0" [9802f470-f9b0-4dce-ae14-a7a307ad6302] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1124 13:15:11.611866  352949 system_pods.go:61] "csi-hostpath-resizer-0" [51d376b9-9b34-4a04-83ac-7314ecf41dfd] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1124 13:15:11.611878  352949 system_pods.go:61] "csi-hostpathplugin-vghhv" [59812e95-9cdf-40cc-b25f-9e63c8e5157e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1124 13:15:11.611897  352949 system_pods.go:61] "etcd-addons-715644" [dc5e506e-ade9-4f45-88ce-a00029f5bac0] Running
	I1124 13:15:11.611903  352949 system_pods.go:61] "kindnet-jb6km" [e9f0918e-0b2b-412d-b57f-5c00a40fada8] Running
	I1124 13:15:11.611908  352949 system_pods.go:61] "kube-apiserver-addons-715644" [d6823402-d724-4711-b547-3d7fe46a3013] Running
	I1124 13:15:11.611914  352949 system_pods.go:61] "kube-controller-manager-addons-715644" [d14f128b-31c4-456a-ba08-bfc2ff9c0460] Running
	I1124 13:15:11.611925  352949 system_pods.go:61] "kube-ingress-dns-minikube" [dac12abd-7cd5-4d8c-99e7-e64d99904007] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1124 13:15:11.611931  352949 system_pods.go:61] "kube-proxy-c7prv" [49ab568b-e2d2-447c-a415-05870090b63f] Running
	I1124 13:15:11.611937  352949 system_pods.go:61] "kube-scheduler-addons-715644" [69d8c857-de09-4f63-8c1f-5aa615f7dfc7] Running
	I1124 13:15:11.611948  352949 system_pods.go:61] "metrics-server-85b7d694d7-4fdfd" [1f2dc823-40d4-4194-b831-1c70bbcf7b66] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1124 13:15:11.611955  352949 system_pods.go:61] "nvidia-device-plugin-daemonset-h8tqs" [b4524963-77cc-46be-83b4-8a0f045e9846] Pending
	I1124 13:15:11.611966  352949 system_pods.go:61] "registry-6b586f9694-x6s72" [0fc44edb-9f6c-414d-a733-43015903fde8] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1124 13:15:11.611975  352949 system_pods.go:61] "registry-creds-764b6fb674-4tmmd" [c1b81cb2-69d5-4d69-a8f2-14e4d4a88632] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1124 13:15:11.611984  352949 system_pods.go:61] "registry-proxy-kx44z" [98e78cf2-d459-4d88-8617-05ab22523a89] Pending
	I1124 13:15:11.611994  352949 system_pods.go:61] "snapshot-controller-7d9fbc56b8-7jfmk" [756a8f9b-ebff-4482-9f0a-48bc861b05c7] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1124 13:15:11.612006  352949 system_pods.go:61] "snapshot-controller-7d9fbc56b8-9lv6w" [acbfda98-591a-4a98-a1c9-313807b5cb1f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1124 13:15:11.612023  352949 system_pods.go:61] "storage-provisioner" [dbc5c46b-5b5e-4b7c-9f2b-0b773eb48153] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 13:15:11.612034  352949 system_pods.go:74] duration metric: took 3.890111ms to wait for pod list to return data ...
	I1124 13:15:11.612046  352949 default_sa.go:34] waiting for default service account to be created ...
	I1124 13:15:11.614094  352949 default_sa.go:45] found service account: "default"
	I1124 13:15:11.614113  352949 default_sa.go:55] duration metric: took 2.058637ms for default service account to be created ...
	I1124 13:15:11.614122  352949 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 13:15:11.619342  352949 system_pods.go:86] 20 kube-system pods found
	I1124 13:15:11.619378  352949 system_pods.go:89] "amd-gpu-device-plugin-hxftx" [0a8e4e82-1ce0-4f98-9dd7-0239163661a3] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1124 13:15:11.619389  352949 system_pods.go:89] "coredns-66bc5c9577-8kqrg" [a8a7a050-acf2-455f-8a19-63ee9e9aee24] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 13:15:11.619401  352949 system_pods.go:89] "csi-hostpath-attacher-0" [9802f470-f9b0-4dce-ae14-a7a307ad6302] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1124 13:15:11.619409  352949 system_pods.go:89] "csi-hostpath-resizer-0" [51d376b9-9b34-4a04-83ac-7314ecf41dfd] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1124 13:15:11.619422  352949 system_pods.go:89] "csi-hostpathplugin-vghhv" [59812e95-9cdf-40cc-b25f-9e63c8e5157e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1124 13:15:11.619432  352949 system_pods.go:89] "etcd-addons-715644" [dc5e506e-ade9-4f45-88ce-a00029f5bac0] Running
	I1124 13:15:11.619442  352949 system_pods.go:89] "kindnet-jb6km" [e9f0918e-0b2b-412d-b57f-5c00a40fada8] Running
	I1124 13:15:11.619448  352949 system_pods.go:89] "kube-apiserver-addons-715644" [d6823402-d724-4711-b547-3d7fe46a3013] Running
	I1124 13:15:11.619457  352949 system_pods.go:89] "kube-controller-manager-addons-715644" [d14f128b-31c4-456a-ba08-bfc2ff9c0460] Running
	I1124 13:15:11.619466  352949 system_pods.go:89] "kube-ingress-dns-minikube" [dac12abd-7cd5-4d8c-99e7-e64d99904007] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1124 13:15:11.619475  352949 system_pods.go:89] "kube-proxy-c7prv" [49ab568b-e2d2-447c-a415-05870090b63f] Running
	I1124 13:15:11.619484  352949 system_pods.go:89] "kube-scheduler-addons-715644" [69d8c857-de09-4f63-8c1f-5aa615f7dfc7] Running
	I1124 13:15:11.619495  352949 system_pods.go:89] "metrics-server-85b7d694d7-4fdfd" [1f2dc823-40d4-4194-b831-1c70bbcf7b66] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1124 13:15:11.619501  352949 system_pods.go:89] "nvidia-device-plugin-daemonset-h8tqs" [b4524963-77cc-46be-83b4-8a0f045e9846] Pending
	I1124 13:15:11.619515  352949 system_pods.go:89] "registry-6b586f9694-x6s72" [0fc44edb-9f6c-414d-a733-43015903fde8] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1124 13:15:11.619526  352949 system_pods.go:89] "registry-creds-764b6fb674-4tmmd" [c1b81cb2-69d5-4d69-a8f2-14e4d4a88632] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1124 13:15:11.619532  352949 system_pods.go:89] "registry-proxy-kx44z" [98e78cf2-d459-4d88-8617-05ab22523a89] Pending
	I1124 13:15:11.619544  352949 system_pods.go:89] "snapshot-controller-7d9fbc56b8-7jfmk" [756a8f9b-ebff-4482-9f0a-48bc861b05c7] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1124 13:15:11.619557  352949 system_pods.go:89] "snapshot-controller-7d9fbc56b8-9lv6w" [acbfda98-591a-4a98-a1c9-313807b5cb1f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1124 13:15:11.619567  352949 system_pods.go:89] "storage-provisioner" [dbc5c46b-5b5e-4b7c-9f2b-0b773eb48153] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 13:15:11.619593  352949 retry.go:31] will retry after 238.25305ms: missing components: kube-dns
	I1124 13:15:11.785531  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:11.887749  352949 system_pods.go:86] 20 kube-system pods found
	I1124 13:15:11.887787  352949 system_pods.go:89] "amd-gpu-device-plugin-hxftx" [0a8e4e82-1ce0-4f98-9dd7-0239163661a3] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1124 13:15:11.887798  352949 system_pods.go:89] "coredns-66bc5c9577-8kqrg" [a8a7a050-acf2-455f-8a19-63ee9e9aee24] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 13:15:11.887814  352949 system_pods.go:89] "csi-hostpath-attacher-0" [9802f470-f9b0-4dce-ae14-a7a307ad6302] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1124 13:15:11.887824  352949 system_pods.go:89] "csi-hostpath-resizer-0" [51d376b9-9b34-4a04-83ac-7314ecf41dfd] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1124 13:15:11.887833  352949 system_pods.go:89] "csi-hostpathplugin-vghhv" [59812e95-9cdf-40cc-b25f-9e63c8e5157e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1124 13:15:11.887846  352949 system_pods.go:89] "etcd-addons-715644" [dc5e506e-ade9-4f45-88ce-a00029f5bac0] Running
	I1124 13:15:11.887854  352949 system_pods.go:89] "kindnet-jb6km" [e9f0918e-0b2b-412d-b57f-5c00a40fada8] Running
	I1124 13:15:11.887860  352949 system_pods.go:89] "kube-apiserver-addons-715644" [d6823402-d724-4711-b547-3d7fe46a3013] Running
	I1124 13:15:11.887877  352949 system_pods.go:89] "kube-controller-manager-addons-715644" [d14f128b-31c4-456a-ba08-bfc2ff9c0460] Running
	I1124 13:15:11.887918  352949 system_pods.go:89] "kube-ingress-dns-minikube" [dac12abd-7cd5-4d8c-99e7-e64d99904007] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1124 13:15:11.887930  352949 system_pods.go:89] "kube-proxy-c7prv" [49ab568b-e2d2-447c-a415-05870090b63f] Running
	I1124 13:15:11.887936  352949 system_pods.go:89] "kube-scheduler-addons-715644" [69d8c857-de09-4f63-8c1f-5aa615f7dfc7] Running
	I1124 13:15:11.887944  352949 system_pods.go:89] "metrics-server-85b7d694d7-4fdfd" [1f2dc823-40d4-4194-b831-1c70bbcf7b66] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1124 13:15:11.887953  352949 system_pods.go:89] "nvidia-device-plugin-daemonset-h8tqs" [b4524963-77cc-46be-83b4-8a0f045e9846] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1124 13:15:11.887962  352949 system_pods.go:89] "registry-6b586f9694-x6s72" [0fc44edb-9f6c-414d-a733-43015903fde8] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1124 13:15:11.887970  352949 system_pods.go:89] "registry-creds-764b6fb674-4tmmd" [c1b81cb2-69d5-4d69-a8f2-14e4d4a88632] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1124 13:15:11.887983  352949 system_pods.go:89] "registry-proxy-kx44z" [98e78cf2-d459-4d88-8617-05ab22523a89] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1124 13:15:11.887991  352949 system_pods.go:89] "snapshot-controller-7d9fbc56b8-7jfmk" [756a8f9b-ebff-4482-9f0a-48bc861b05c7] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1124 13:15:11.887999  352949 system_pods.go:89] "snapshot-controller-7d9fbc56b8-9lv6w" [acbfda98-591a-4a98-a1c9-313807b5cb1f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1124 13:15:11.888007  352949 system_pods.go:89] "storage-provisioner" [dbc5c46b-5b5e-4b7c-9f2b-0b773eb48153] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 13:15:11.888029  352949 retry.go:31] will retry after 313.084796ms: missing components: kube-dns
	I1124 13:15:11.985814  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:11.985861  352949 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1124 13:15:11.985874  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:12.028940  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:12.205709  352949 system_pods.go:86] 20 kube-system pods found
	I1124 13:15:12.205748  352949 system_pods.go:89] "amd-gpu-device-plugin-hxftx" [0a8e4e82-1ce0-4f98-9dd7-0239163661a3] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1124 13:15:12.205756  352949 system_pods.go:89] "coredns-66bc5c9577-8kqrg" [a8a7a050-acf2-455f-8a19-63ee9e9aee24] Running
	I1124 13:15:12.205765  352949 system_pods.go:89] "csi-hostpath-attacher-0" [9802f470-f9b0-4dce-ae14-a7a307ad6302] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1124 13:15:12.205772  352949 system_pods.go:89] "csi-hostpath-resizer-0" [51d376b9-9b34-4a04-83ac-7314ecf41dfd] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1124 13:15:12.205780  352949 system_pods.go:89] "csi-hostpathplugin-vghhv" [59812e95-9cdf-40cc-b25f-9e63c8e5157e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1124 13:15:12.205786  352949 system_pods.go:89] "etcd-addons-715644" [dc5e506e-ade9-4f45-88ce-a00029f5bac0] Running
	I1124 13:15:12.205791  352949 system_pods.go:89] "kindnet-jb6km" [e9f0918e-0b2b-412d-b57f-5c00a40fada8] Running
	I1124 13:15:12.205799  352949 system_pods.go:89] "kube-apiserver-addons-715644" [d6823402-d724-4711-b547-3d7fe46a3013] Running
	I1124 13:15:12.205809  352949 system_pods.go:89] "kube-controller-manager-addons-715644" [d14f128b-31c4-456a-ba08-bfc2ff9c0460] Running
	I1124 13:15:12.205820  352949 system_pods.go:89] "kube-ingress-dns-minikube" [dac12abd-7cd5-4d8c-99e7-e64d99904007] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1124 13:15:12.205828  352949 system_pods.go:89] "kube-proxy-c7prv" [49ab568b-e2d2-447c-a415-05870090b63f] Running
	I1124 13:15:12.205834  352949 system_pods.go:89] "kube-scheduler-addons-715644" [69d8c857-de09-4f63-8c1f-5aa615f7dfc7] Running
	I1124 13:15:12.205844  352949 system_pods.go:89] "metrics-server-85b7d694d7-4fdfd" [1f2dc823-40d4-4194-b831-1c70bbcf7b66] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1124 13:15:12.205853  352949 system_pods.go:89] "nvidia-device-plugin-daemonset-h8tqs" [b4524963-77cc-46be-83b4-8a0f045e9846] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1124 13:15:12.205865  352949 system_pods.go:89] "registry-6b586f9694-x6s72" [0fc44edb-9f6c-414d-a733-43015903fde8] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1124 13:15:12.205877  352949 system_pods.go:89] "registry-creds-764b6fb674-4tmmd" [c1b81cb2-69d5-4d69-a8f2-14e4d4a88632] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1124 13:15:12.205902  352949 system_pods.go:89] "registry-proxy-kx44z" [98e78cf2-d459-4d88-8617-05ab22523a89] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1124 13:15:12.205915  352949 system_pods.go:89] "snapshot-controller-7d9fbc56b8-7jfmk" [756a8f9b-ebff-4482-9f0a-48bc861b05c7] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1124 13:15:12.205928  352949 system_pods.go:89] "snapshot-controller-7d9fbc56b8-9lv6w" [acbfda98-591a-4a98-a1c9-313807b5cb1f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1124 13:15:12.205936  352949 system_pods.go:89] "storage-provisioner" [dbc5c46b-5b5e-4b7c-9f2b-0b773eb48153] Running
	I1124 13:15:12.205949  352949 system_pods.go:126] duration metric: took 591.819695ms to wait for k8s-apps to be running ...
	I1124 13:15:12.205963  352949 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 13:15:12.206015  352949 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 13:15:12.219097  352949 system_svc.go:56] duration metric: took 13.126375ms WaitForService to wait for kubelet
	I1124 13:15:12.219124  352949 kubeadm.go:587] duration metric: took 42.195702775s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 13:15:12.219152  352949 node_conditions.go:102] verifying NodePressure condition ...
	I1124 13:15:12.221562  352949 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1124 13:15:12.221586  352949 node_conditions.go:123] node cpu capacity is 8
	I1124 13:15:12.221604  352949 node_conditions.go:105] duration metric: took 2.446287ms to run NodePressure ...
	I1124 13:15:12.221616  352949 start.go:242] waiting for startup goroutines ...
	I1124 13:15:12.284948  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:12.462878  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:12.463071  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:12.529171  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:12.786248  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:12.964180  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:12.964312  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:13.029719  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:13.285869  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:13.463035  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:13.463065  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:13.528948  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:13.785860  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:13.963217  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:13.963413  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:14.029405  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:14.285023  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:14.463179  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:14.463425  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:14.529232  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:14.785620  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:14.962753  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:14.962754  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:15.028691  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:15.285656  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:15.462506  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:15.462605  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:15.528449  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:15.785047  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:15.964194  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:15.964967  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:16.029263  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:16.287481  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:16.464683  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:16.464765  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:16.529052  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:16.786286  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:16.963548  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:16.963752  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:17.027907  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:17.285982  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:17.463140  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:17.463519  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:17.529090  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:17.786014  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:17.963095  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:17.963254  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:18.029046  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:18.286129  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:18.463373  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:18.463451  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:18.528853  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:18.786070  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:18.962649  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:18.962822  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:19.029098  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:19.303352  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:19.464041  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:19.464314  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:19.529511  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:19.785150  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:19.963170  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:19.963324  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:20.063560  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:20.285230  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:20.463298  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:20.463458  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:20.529258  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:20.785992  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:20.964256  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:20.966008  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:21.028729  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:21.285438  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:21.463664  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:21.463794  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:21.528675  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:21.785289  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:21.963994  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:21.964072  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:22.044093  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:22.284506  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:22.462314  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:22.462357  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:22.528550  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:22.785306  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:22.962713  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:22.962843  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:23.027915  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:23.285620  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:23.462477  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:23.462694  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:23.528054  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:23.785342  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:23.962352  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:23.962598  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:24.028595  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:24.286069  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:24.463669  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:24.463883  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:24.528384  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:24.785618  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:24.962989  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:24.963121  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:25.028651  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:25.285346  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:25.462929  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:25.463090  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:25.528620  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:25.785298  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:25.963336  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:25.963453  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:26.028972  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:26.287116  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:26.463168  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:26.463349  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:26.527656  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:26.785569  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:26.962731  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:26.962832  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:27.028655  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:27.285739  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:27.467262  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:27.467514  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:27.543490  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:27.785503  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:27.963068  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:27.963151  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:28.028418  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:28.285777  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:28.462253  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:28.462529  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:28.528824  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:28.785406  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:28.962102  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:28.962221  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:29.028217  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:29.284489  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:29.462611  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:29.462667  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:29.528778  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:29.786052  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:29.962648  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:29.962719  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:30.027507  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:30.285655  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:30.462531  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:30.462603  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:30.528213  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:30.784673  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:30.962502  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:30.962737  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:31.062587  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:31.284974  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:31.462582  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:31.462632  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:31.527901  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:31.785808  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:31.963050  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:31.963156  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:32.028566  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:32.285229  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:32.462622  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:32.462902  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:32.527774  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:32.789377  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:32.962911  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:32.963094  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:33.028378  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:33.285198  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:33.463479  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:33.463571  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:33.528200  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:33.784562  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:33.962448  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:33.962620  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:34.027845  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:34.285196  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:34.463299  352949 kapi.go:107] duration metric: took 1m2.503864265s to wait for kubernetes.io/minikube-addons=registry ...
	I1124 13:15:34.463489  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:34.528533  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:34.785494  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:34.964405  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:35.030277  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:35.286092  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:35.463233  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:35.528929  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:35.786261  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:35.963360  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:36.029409  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:36.329650  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:36.485244  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:36.528506  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:36.785123  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:36.963503  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:37.029118  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:37.285979  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:37.462954  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:37.528080  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:37.785454  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:37.962838  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:38.029205  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:38.286653  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:38.463192  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:38.529102  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:38.785164  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:38.963634  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:39.028267  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:39.286292  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:39.463831  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:39.528677  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:39.787057  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:39.963138  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:40.029002  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:40.284729  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:40.462455  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:40.529401  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:40.784787  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:40.962258  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:41.028422  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:41.285462  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:41.463179  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:41.528945  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:41.785535  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:41.962198  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:42.028859  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:42.286034  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:42.463225  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:42.529188  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:42.785058  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:42.962750  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:43.027820  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:43.285160  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:43.462838  352949 kapi.go:107] duration metric: took 1m11.503403005s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1124 13:15:43.528005  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:43.784603  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:44.029246  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:44.286751  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:44.528672  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:44.785231  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:45.029497  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:45.285876  352949 kapi.go:107] duration metric: took 1m7.003829292s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1124 13:15:45.287029  352949 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-715644 cluster.
	I1124 13:15:45.288263  352949 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1124 13:15:45.289528  352949 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1124 13:15:45.530576  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:46.030191  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:46.528515  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:47.028928  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:47.529040  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:48.029242  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:48.528547  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:49.029035  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:49.529062  352949 kapi.go:107] duration metric: took 1m18.003733484s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1124 13:15:49.530469  352949 out.go:179] * Enabled addons: registry-creds, amd-gpu-device-plugin, ingress-dns, inspektor-gadget, cloud-spanner, nvidia-device-plugin, storage-provisioner, metrics-server, yakd, storage-provisioner-rancher, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, gcp-auth, ingress
	I1124 13:15:49.531623  352949 addons.go:530] duration metric: took 1m19.508163981s for enable addons: enabled=[registry-creds amd-gpu-device-plugin ingress-dns inspektor-gadget cloud-spanner nvidia-device-plugin storage-provisioner metrics-server yakd storage-provisioner-rancher default-storageclass volumesnapshots registry csi-hostpath-driver gcp-auth ingress]
	I1124 13:15:49.531662  352949 start.go:247] waiting for cluster config update ...
	I1124 13:15:49.531685  352949 start.go:256] writing updated cluster config ...
	I1124 13:15:49.531946  352949 ssh_runner.go:195] Run: rm -f paused
	I1124 13:15:49.536016  352949 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 13:15:49.538533  352949 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-8kqrg" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:15:49.542206  352949 pod_ready.go:94] pod "coredns-66bc5c9577-8kqrg" is "Ready"
	I1124 13:15:49.542225  352949 pod_ready.go:86] duration metric: took 3.673868ms for pod "coredns-66bc5c9577-8kqrg" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:15:49.543863  352949 pod_ready.go:83] waiting for pod "etcd-addons-715644" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:15:49.547179  352949 pod_ready.go:94] pod "etcd-addons-715644" is "Ready"
	I1124 13:15:49.547195  352949 pod_ready.go:86] duration metric: took 3.314923ms for pod "etcd-addons-715644" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:15:49.548817  352949 pod_ready.go:83] waiting for pod "kube-apiserver-addons-715644" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:15:49.551864  352949 pod_ready.go:94] pod "kube-apiserver-addons-715644" is "Ready"
	I1124 13:15:49.551881  352949 pod_ready.go:86] duration metric: took 3.04732ms for pod "kube-apiserver-addons-715644" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:15:49.553470  352949 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-715644" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:15:49.939409  352949 pod_ready.go:94] pod "kube-controller-manager-addons-715644" is "Ready"
	I1124 13:15:49.939443  352949 pod_ready.go:86] duration metric: took 385.955009ms for pod "kube-controller-manager-addons-715644" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:15:50.140540  352949 pod_ready.go:83] waiting for pod "kube-proxy-c7prv" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:15:50.539859  352949 pod_ready.go:94] pod "kube-proxy-c7prv" is "Ready"
	I1124 13:15:50.539906  352949 pod_ready.go:86] duration metric: took 399.318831ms for pod "kube-proxy-c7prv" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:15:50.740196  352949 pod_ready.go:83] waiting for pod "kube-scheduler-addons-715644" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:15:51.139228  352949 pod_ready.go:94] pod "kube-scheduler-addons-715644" is "Ready"
	I1124 13:15:51.139258  352949 pod_ready.go:86] duration metric: took 399.037371ms for pod "kube-scheduler-addons-715644" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:15:51.139275  352949 pod_ready.go:40] duration metric: took 1.603221221s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 13:15:51.184686  352949 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1124 13:15:51.186224  352949 out.go:179] * Done! kubectl is now configured to use "addons-715644" cluster and "default" namespace by default
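	
	A rough manual equivalent of the readiness polling above, as a sketch only: the harness waits from kapi.go/pod_ready.go against the API rather than shelling out to kubectl, and the context name is assumed to match the profile.
	
	    # wait for an addon pod by label, as kapi.go does for ingress-nginx
	    kubectl --context addons-715644 -n ingress-nginx wait pod \
	      -l app.kubernetes.io/name=ingress-nginx --for=condition=Ready --timeout=300s
	    # wait for one of the core kube-system labels, as pod_ready.go does for coredns
	    kubectl --context addons-715644 -n kube-system wait pod \
	      -l k8s-app=kube-dns --for=condition=Ready --timeout=240s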
	
	
	==> CRI-O <==
	Nov 24 13:18:37 addons-715644 crio[779]: time="2025-11-24T13:18:37.493953025Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-pc4vt/POD" id=ea29f7cc-8794-47f8-b47b-99ad0023f21a name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 24 13:18:37 addons-715644 crio[779]: time="2025-11-24T13:18:37.494042819Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 13:18:37 addons-715644 crio[779]: time="2025-11-24T13:18:37.500470776Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-pc4vt Namespace:default ID:dc540c2f2f5e2fe88dc057c5bc8f672833ab32bb20bc19e896bea7019cff16ba UID:096a889c-1b51-4d11-a500-1f686d3330d1 NetNS:/var/run/netns/d69abe62-1c31-4969-995b-3d4a3d0f4998 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000c92e48}] Aliases:map[]}"
	Nov 24 13:18:37 addons-715644 crio[779]: time="2025-11-24T13:18:37.50050766Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-pc4vt to CNI network \"kindnet\" (type=ptp)"
	Nov 24 13:18:37 addons-715644 crio[779]: time="2025-11-24T13:18:37.510185708Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-pc4vt Namespace:default ID:dc540c2f2f5e2fe88dc057c5bc8f672833ab32bb20bc19e896bea7019cff16ba UID:096a889c-1b51-4d11-a500-1f686d3330d1 NetNS:/var/run/netns/d69abe62-1c31-4969-995b-3d4a3d0f4998 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000c92e48}] Aliases:map[]}"
	Nov 24 13:18:37 addons-715644 crio[779]: time="2025-11-24T13:18:37.510288071Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-pc4vt for CNI network kindnet (type=ptp)"
	Nov 24 13:18:37 addons-715644 crio[779]: time="2025-11-24T13:18:37.511067888Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 24 13:18:37 addons-715644 crio[779]: time="2025-11-24T13:18:37.511789114Z" level=info msg="Ran pod sandbox dc540c2f2f5e2fe88dc057c5bc8f672833ab32bb20bc19e896bea7019cff16ba with infra container: default/hello-world-app-5d498dc89-pc4vt/POD" id=ea29f7cc-8794-47f8-b47b-99ad0023f21a name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 24 13:18:37 addons-715644 crio[779]: time="2025-11-24T13:18:37.512985833Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=80e4925d-2f9b-4a3b-aea1-1b5086bc5e6e name=/runtime.v1.ImageService/ImageStatus
	Nov 24 13:18:37 addons-715644 crio[779]: time="2025-11-24T13:18:37.513121151Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=80e4925d-2f9b-4a3b-aea1-1b5086bc5e6e name=/runtime.v1.ImageService/ImageStatus
	Nov 24 13:18:37 addons-715644 crio[779]: time="2025-11-24T13:18:37.51316653Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:1.0 found" id=80e4925d-2f9b-4a3b-aea1-1b5086bc5e6e name=/runtime.v1.ImageService/ImageStatus
	Nov 24 13:18:37 addons-715644 crio[779]: time="2025-11-24T13:18:37.513859528Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=71fcdc09-77dc-45b2-a5b2-c845eafb92c3 name=/runtime.v1.ImageService/PullImage
	Nov 24 13:18:37 addons-715644 crio[779]: time="2025-11-24T13:18:37.529315364Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Nov 24 13:18:38 addons-715644 crio[779]: time="2025-11-24T13:18:38.34687625Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86" id=71fcdc09-77dc-45b2-a5b2-c845eafb92c3 name=/runtime.v1.ImageService/PullImage
	Nov 24 13:18:38 addons-715644 crio[779]: time="2025-11-24T13:18:38.347411734Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=63f476c8-2195-4f7b-a96c-90fa203db416 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 13:18:38 addons-715644 crio[779]: time="2025-11-24T13:18:38.348728805Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=4d9d6bfd-fdc7-44da-84fa-1107db1ded11 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 13:18:38 addons-715644 crio[779]: time="2025-11-24T13:18:38.352146163Z" level=info msg="Creating container: default/hello-world-app-5d498dc89-pc4vt/hello-world-app" id=0aa30bbd-dc05-414c-b5a0-6408fd54c823 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 13:18:38 addons-715644 crio[779]: time="2025-11-24T13:18:38.352242163Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 13:18:38 addons-715644 crio[779]: time="2025-11-24T13:18:38.357763328Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 13:18:38 addons-715644 crio[779]: time="2025-11-24T13:18:38.357973287Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/c73cc8f59ce5a1e9580c4a47ad5505840a1327568eb898511d5c2b775c3fa6e9/merged/etc/passwd: no such file or directory"
	Nov 24 13:18:38 addons-715644 crio[779]: time="2025-11-24T13:18:38.358010277Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/c73cc8f59ce5a1e9580c4a47ad5505840a1327568eb898511d5c2b775c3fa6e9/merged/etc/group: no such file or directory"
	Nov 24 13:18:38 addons-715644 crio[779]: time="2025-11-24T13:18:38.358300969Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 13:18:38 addons-715644 crio[779]: time="2025-11-24T13:18:38.388466592Z" level=info msg="Created container 25be03c79c6f359c379d00b3f8d233a0d783f20b0c525dae66ca95aeeb0d103f: default/hello-world-app-5d498dc89-pc4vt/hello-world-app" id=0aa30bbd-dc05-414c-b5a0-6408fd54c823 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 13:18:38 addons-715644 crio[779]: time="2025-11-24T13:18:38.389029898Z" level=info msg="Starting container: 25be03c79c6f359c379d00b3f8d233a0d783f20b0c525dae66ca95aeeb0d103f" id=aaf5aceb-ccbc-41cc-a2c0-2aaa1a24a4c0 name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 13:18:38 addons-715644 crio[779]: time="2025-11-24T13:18:38.390703083Z" level=info msg="Started container" PID=9808 containerID=25be03c79c6f359c379d00b3f8d233a0d783f20b0c525dae66ca95aeeb0d103f description=default/hello-world-app-5d498dc89-pc4vt/hello-world-app id=aaf5aceb-ccbc-41cc-a2c0-2aaa1a24a4c0 name=/runtime.v1.RuntimeService/StartContainer sandboxID=dc540c2f2f5e2fe88dc057c5bc8f672833ab32bb20bc19e896bea7019cff16ba
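	
	The excerpt above traces one complete CRI-O flow for hello-world-app: RunPodSandbox, image status check, PullImage, CreateContainer, StartContainer. A minimal sketch for inspecting the same objects on the node, assuming shell access through minikube ssh (the IDs are the ones printed above):
	
	    minikube -p addons-715644 ssh
	    sudo crictl pods --name hello-world-app-5d498dc89-pc4vt   # sandbox dc540c2f2f5e2...
	    sudo crictl ps -a --name hello-world-app                  # container 25be03c79c6f3...
	    sudo crictl logs 25be03c79c6f3                            # its stdout/stderr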
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED                  STATE               NAME                                     ATTEMPT             POD ID              POD                                        NAMESPACE
	25be03c79c6f3       docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86                                        Less than a second ago   Running             hello-world-app                          0                   dc540c2f2f5e2       hello-world-app-5d498dc89-pc4vt            default
	e235aa339c9ac       docker.io/upmcenterprises/registry-creds@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605                             2 minutes ago            Running             registry-creds                           0                   1298125a48ebc       registry-creds-764b6fb674-4tmmd            kube-system
	56fed20e8fd93       docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7                                              2 minutes ago            Running             nginx                                    0                   8d0527f91a959       nginx                                      default
	cdd07a7ef83b2       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          2 minutes ago            Running             busybox                                  0                   0e1d54a5e14d2       busybox                                    default
	cac2d22f42517       registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27                             2 minutes ago            Running             controller                               0                   af8e7dbb1cc4b       ingress-nginx-controller-6c8bf45fb-n6vc7   ingress-nginx
	431566734db48       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 2 minutes ago            Running             gcp-auth                                 0                   5cfd206ad6fa9       gcp-auth-78565c9fb4-jllj4                  gcp-auth
	32b77a8342024       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          2 minutes ago            Running             csi-snapshotter                          0                   da89e52a4842b       csi-hostpathplugin-vghhv                   kube-system
	9946073c4dbc0       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          2 minutes ago            Running             csi-provisioner                          0                   da89e52a4842b       csi-hostpathplugin-vghhv                   kube-system
	97f3de9ff4a38       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            2 minutes ago            Running             liveness-probe                           0                   da89e52a4842b       csi-hostpathplugin-vghhv                   kube-system
	3492cc9269215       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           2 minutes ago            Running             hostpath                                 0                   da89e52a4842b       csi-hostpathplugin-vghhv                   kube-system
	d9134036f413d       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:9a12b3c1d155bb081ff408a9b6c1cec18573c967e0c3917225b81ffe11c0b7f2                            2 minutes ago            Running             gadget                                   0                   46d8edf92cf66       gadget-j5p27                               gadget
	db3d376ec41b7       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                3 minutes ago            Running             node-driver-registrar                    0                   da89e52a4842b       csi-hostpathplugin-vghhv                   kube-system
	a9fad48eebd55       nvcr.io/nvidia/k8s-device-plugin@sha256:20db699f1480b6f37423cab909e9c6df5a4fdbd981b405e0d72f00a86fee5100                                     3 minutes ago            Running             nvidia-device-plugin-ctr                 0                   d119d41972e11       nvidia-device-plugin-daemonset-h8tqs       kube-system
	7f9d1b3fe4a90       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              3 minutes ago            Running             registry-proxy                           0                   2eddc137a3ac6       registry-proxy-kx44z                       kube-system
	de0cc746d3ed0       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     3 minutes ago            Running             amd-gpu-device-plugin                    0                   02c01f0506934       amd-gpu-device-plugin-hxftx                kube-system
	93fbc223db37d       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   3 minutes ago            Running             csi-external-health-monitor-controller   0                   da89e52a4842b       csi-hostpathplugin-vghhv                   kube-system
	a5ef3026f01f1       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             3 minutes ago            Running             local-path-provisioner                   0                   b12546325a8f8       local-path-provisioner-648f6765c9-7hnlx    local-path-storage
	744e5383d888f       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:f016159150cb72d879e0d3b6852afbed68fe21d86be1e92c62ab7f56515287f5                   3 minutes ago            Exited              patch                                    0                   8f8671d4b9e6c       ingress-nginx-admission-patch-ds6p4        ingress-nginx
	5949deef674ca       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:f016159150cb72d879e0d3b6852afbed68fe21d86be1e92c62ab7f56515287f5                   3 minutes ago            Exited              create                                   0                   5fca1df273341       ingress-nginx-admission-create-gq29m       ingress-nginx
	9be932139aaef       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             3 minutes ago            Running             csi-attacher                             0                   5d19ee3fadd15       csi-hostpath-attacher-0                    kube-system
	e91ca551d1e0e       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      3 minutes ago            Running             volume-snapshot-controller               0                   24ed0b6885160       snapshot-controller-7d9fbc56b8-7jfmk       kube-system
	edf878679786a       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      3 minutes ago            Running             volume-snapshot-controller               0                   7489f2f2032be       snapshot-controller-7d9fbc56b8-9lv6w       kube-system
	17cd086a0a854       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              3 minutes ago            Running             csi-resizer                              0                   2aee0aff8abfc       csi-hostpath-resizer-0                     kube-system
	33bdbf096e506       docker.io/library/registry@sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a                                           3 minutes ago            Running             registry                                 0                   aaff989c55fae       registry-6b586f9694-x6s72                  kube-system
	01ac83a9cfb43       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                                              3 minutes ago            Running             yakd                                     0                   91fbee87053a0       yakd-dashboard-5ff678cb9-jd5f7             yakd-dashboard
	bef94f1c94dd3       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               3 minutes ago            Running             minikube-ingress-dns                     0                   a28f02ad505a5       kube-ingress-dns-minikube                  kube-system
	83f5e4de5d194       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        3 minutes ago            Running             metrics-server                           0                   919e585bb6c2e       metrics-server-85b7d694d7-4fdfd            kube-system
	68304ee706d1f       gcr.io/cloud-spanner-emulator/emulator@sha256:22a4d5b0f97bd0c2ee20da342493c5a60e40b4d62ec20c174cb32ff4ee1f65bf                               3 minutes ago            Running             cloud-spanner-emulator                   0                   3cadee47b97c8       cloud-spanner-emulator-5bdddb765-dk2gw     default
	9fc9fbc51a1d5       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             3 minutes ago            Running             coredns                                  0                   99a478ee2f1bf       coredns-66bc5c9577-8kqrg                   kube-system
	80ca718552080       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             3 minutes ago            Running             storage-provisioner                      0                   e18d47fc14725       storage-provisioner                        kube-system
	3c0239d349ace       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                                             4 minutes ago            Running             kindnet-cni                              0                   0181887543fda       kindnet-jb6km                              kube-system
	1cd2d69a4521d       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                                             4 minutes ago            Running             kube-proxy                               0                   5732d1eea9eba       kube-proxy-c7prv                           kube-system
	f906d790e557c       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                                             4 minutes ago            Running             etcd                                     0                   928929891d573       etcd-addons-715644                         kube-system
	8bd061f25cd27       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                                             4 minutes ago            Running             kube-controller-manager                  0                   f597e84326fc8       kube-controller-manager-addons-715644      kube-system
	73d6f909ae2dc       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                                             4 minutes ago            Running             kube-scheduler                           0                   545d7f48a30bf       kube-scheduler-addons-715644               kube-system
	e080d87ce42a1       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                                             4 minutes ago            Running             kube-apiserver                           0                   1e948645cfd17       kube-apiserver-addons-715644               kube-system
	
	
	==> coredns [9fc9fbc51a1d5d85e698682518d6aabdc2c3030302e75bcb87adb6ae7d4fac0e] <==
	[INFO] 10.244.0.22:34942 - 44377 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000132247s
	[INFO] 10.244.0.22:46556 - 54642 "AAAA IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 188 0.007460581s
	[INFO] 10.244.0.22:59028 - 31267 "A IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 188 0.007541779s
	[INFO] 10.244.0.22:52149 - 8979 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.004224337s
	[INFO] 10.244.0.22:33442 - 18202 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.005592987s
	[INFO] 10.244.0.22:40026 - 63095 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.005546615s
	[INFO] 10.244.0.22:47433 - 26869 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.006547095s
	[INFO] 10.244.0.22:35025 - 363 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.00128106s
	[INFO] 10.244.0.22:40164 - 61138 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001543068s
	[INFO] 10.244.0.28:54640 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000251482s
	[INFO] 10.244.0.28:53444 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00019338s
	[INFO] 10.244.0.30:58553 - 61587 "AAAA IN accounts.google.com.kube-system.svc.cluster.local. udp 67 false 512" NXDOMAIN qr,aa,rd 160 0.000157846s
	[INFO] 10.244.0.30:37220 - 19950 "A IN accounts.google.com.kube-system.svc.cluster.local. udp 67 false 512" NXDOMAIN qr,aa,rd 160 0.000242375s
	[INFO] 10.244.0.30:50637 - 20095 "AAAA IN accounts.google.com.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.000136572s
	[INFO] 10.244.0.30:49451 - 33402 "A IN accounts.google.com.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.000187108s
	[INFO] 10.244.0.30:41848 - 1487 "AAAA IN accounts.google.com.cluster.local. udp 51 false 512" NXDOMAIN qr,aa,rd 144 0.000108527s
	[INFO] 10.244.0.30:55035 - 38475 "A IN accounts.google.com.cluster.local. udp 51 false 512" NXDOMAIN qr,aa,rd 144 0.000171592s
	[INFO] 10.244.0.30:35453 - 18357 "AAAA IN accounts.google.com.us-central1-a.c.k8s-minikube.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 185 0.004896621s
	[INFO] 10.244.0.30:55083 - 46863 "A IN accounts.google.com.us-central1-a.c.k8s-minikube.internal. udp 75 false 512" NXDOMAIN qr,rd,ra 185 0.005570585s
	[INFO] 10.244.0.30:56328 - 13105 "A IN accounts.google.com.c.k8s-minikube.internal. udp 61 false 512" NXDOMAIN qr,rd,ra 166 0.005483676s
	[INFO] 10.244.0.30:38340 - 50111 "AAAA IN accounts.google.com.c.k8s-minikube.internal. udp 61 false 512" NXDOMAIN qr,rd,ra 166 0.015092956s
	[INFO] 10.244.0.30:53306 - 3113 "A IN accounts.google.com.google.internal. udp 53 false 512" NXDOMAIN qr,rd,ra 158 0.005089296s
	[INFO] 10.244.0.30:33828 - 30771 "AAAA IN accounts.google.com.google.internal. udp 53 false 512" NXDOMAIN qr,rd,ra 158 0.007165607s
	[INFO] 10.244.0.30:49989 - 19461 "A IN accounts.google.com. udp 37 false 512" NOERROR qr,rd,ra 72 0.001720215s
	[INFO] 10.244.0.30:46711 - 49352 "AAAA IN accounts.google.com. udp 37 false 512" NOERROR qr,rd,ra 84 0.001815716s
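	
	The NXDOMAIN runs above are search-domain expansion: with a high ndots value, a name such as accounts.google.com is first tried against each cluster and GCE search suffix before the bare name is resolved upstream. A minimal way to confirm the resolver configuration from a pod in this cluster; the commented values are typical and consistent with the suffixes seen in the queries above, but are not captured from this run:
	
	    kubectl --context addons-715644 exec busybox -- cat /etc/resolv.conf
	    # typically along the lines of:
	    #   nameserver 10.96.0.10
	    #   search default.svc.cluster.local svc.cluster.local cluster.local us-central1-a.c.k8s-minikube.internal c.k8s-minikube.internal google.internal
	    #   options ndots:5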
	
	
	==> describe nodes <==
	Name:               addons-715644
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-715644
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b5d1c9f4e75f4e638a533695fd62619949cefcab
	                    minikube.k8s.io/name=addons-715644
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T13_14_24_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-715644
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-715644"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 13:14:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-715644
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 13:18:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 13:16:26 +0000   Mon, 24 Nov 2025 13:14:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 13:16:26 +0000   Mon, 24 Nov 2025 13:14:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 13:16:26 +0000   Mon, 24 Nov 2025 13:14:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 13:16:26 +0000   Mon, 24 Nov 2025 13:15:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-715644
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                0d27353c-5710-4d14-a232-2bb0e65b7fcb
	  Boot ID:                    9a34d64a-eb17-4892-9c0b-855837aec864
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (29 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m47s
	  default                     cloud-spanner-emulator-5bdddb765-dk2gw      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m8s
	  default                     hello-world-app-5d498dc89-pc4vt             0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m26s
	  gadget                      gadget-j5p27                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m7s
	  gcp-auth                    gcp-auth-78565c9fb4-jllj4                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m
	  ingress-nginx               ingress-nginx-controller-6c8bf45fb-n6vc7    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         4m7s
	  kube-system                 amd-gpu-device-plugin-hxftx                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m27s
	  kube-system                 coredns-66bc5c9577-8kqrg                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     4m9s
	  kube-system                 csi-hostpath-attacher-0                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m7s
	  kube-system                 csi-hostpath-resizer-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m7s
	  kube-system                 csi-hostpathplugin-vghhv                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m27s
	  kube-system                 etcd-addons-715644                          100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m14s
	  kube-system                 kindnet-jb6km                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      4m9s
	  kube-system                 kube-apiserver-addons-715644                250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m16s
	  kube-system                 kube-controller-manager-addons-715644       200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m14s
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m7s
	  kube-system                 kube-proxy-c7prv                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m9s
	  kube-system                 kube-scheduler-addons-715644                100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m15s
	  kube-system                 metrics-server-85b7d694d7-4fdfd             100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         4m7s
	  kube-system                 nvidia-device-plugin-daemonset-h8tqs        0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m27s
	  kube-system                 registry-6b586f9694-x6s72                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m7s
	  kube-system                 registry-creds-764b6fb674-4tmmd             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m8s
	  kube-system                 registry-proxy-kx44z                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m27s
	  kube-system                 snapshot-controller-7d9fbc56b8-7jfmk        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m7s
	  kube-system                 snapshot-controller-7d9fbc56b8-9lv6w        0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m7s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m7s
	  local-path-storage          local-path-provisioner-648f6765c9-7hnlx     0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m7s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-jd5f7              0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     4m7s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m7s                   kube-proxy       
	  Normal  Starting                 4m19s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m19s (x2 over 4m19s)  kubelet          Node addons-715644 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m19s (x2 over 4m19s)  kubelet          Node addons-715644 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m19s                  kubelet          Node addons-715644 status is now: NodeHasSufficientPID
	  Normal  Starting                 4m15s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m14s                  kubelet          Node addons-715644 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m14s                  kubelet          Node addons-715644 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m14s                  kubelet          Node addons-715644 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m10s                  node-controller  Node addons-715644 event: Registered Node addons-715644 in Controller
	  Normal  NodeReady                3m27s                  kubelet          Node addons-715644 status is now: NodeReady
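	
	The node view above (labels, conditions, capacity, non-terminated pods, events) can be regenerated against the same profile; a small sketch, assuming the default kubeconfig context created by minikube:
	
	    kubectl --context addons-715644 describe node addons-715644
	    kubectl --context addons-715644 top node   # relies on the metrics-server addon listed above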
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a c8 62 0b 56 43 08 06
	[  +0.000389] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 7e 1c 4f d3 0f 6c 08 06
	[Nov24 13:16] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +1.054353] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +1.023912] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +1.023872] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +1.023897] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +1.023891] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +2.047768] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +4.031637] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +8.191144] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[ +16.382308] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[Nov24 13:17] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	
	
	==> etcd [f906d790e557cecfdacb1936cb0ed8443cc0bc9466c826f9d800db6bf44bf47e] <==
	{"level":"warn","ts":"2025-11-24T13:14:21.052800Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40554","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:14:21.058446Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40568","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:14:21.064665Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40586","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:14:21.070741Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:14:21.077724Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40618","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:14:21.090582Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40662","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:14:21.096120Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40668","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:14:21.102335Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40686","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:14:21.108980Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40714","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:14:21.114687Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40720","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:14:21.123121Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40736","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:14:21.129660Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40770","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:14:21.135506Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40784","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:14:21.141297Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40796","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:14:21.147408Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:14:21.171163Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40834","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:14:21.177198Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:14:21.184552Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40860","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:14:21.234696Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40874","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:14:32.525206Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38860","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:14:32.532425Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38874","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:14:58.599116Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45510","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:14:58.605260Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:14:58.621090Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45540","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:14:58.627316Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45560","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [431566734db4859c1eef1f90f69e289c772af138aa943dda9b3895b932510bf1] <==
	2025/11/24 13:15:44 GCP Auth Webhook started!
	2025/11/24 13:15:51 Ready to marshal response ...
	2025/11/24 13:15:51 Ready to write response ...
	2025/11/24 13:15:51 Ready to marshal response ...
	2025/11/24 13:15:51 Ready to write response ...
	2025/11/24 13:15:51 Ready to marshal response ...
	2025/11/24 13:15:51 Ready to write response ...
	2025/11/24 13:16:00 Ready to marshal response ...
	2025/11/24 13:16:00 Ready to write response ...
	2025/11/24 13:16:00 Ready to marshal response ...
	2025/11/24 13:16:00 Ready to write response ...
	2025/11/24 13:16:05 Ready to marshal response ...
	2025/11/24 13:16:05 Ready to write response ...
	2025/11/24 13:16:08 Ready to marshal response ...
	2025/11/24 13:16:08 Ready to write response ...
	2025/11/24 13:16:11 Ready to marshal response ...
	2025/11/24 13:16:11 Ready to write response ...
	2025/11/24 13:16:12 Ready to marshal response ...
	2025/11/24 13:16:12 Ready to write response ...
	2025/11/24 13:16:28 Ready to marshal response ...
	2025/11/24 13:16:28 Ready to write response ...
	2025/11/24 13:18:37 Ready to marshal response ...
	2025/11/24 13:18:37 Ready to write response ...
	
	
	==> kernel <==
	 13:18:38 up  2:01,  0 user,  load average: 0.19, 0.71, 1.05
	Linux addons-715644 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [3c0239d349ace6e30dffd2560683ba8f02197dfb6eb490d1097a535ae3d5599f] <==
	I1124 13:16:30.877378       1 main.go:301] handling current node
	I1124 13:16:40.877558       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 13:16:40.877596       1 main.go:301] handling current node
	I1124 13:16:50.884987       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 13:16:50.885018       1 main.go:301] handling current node
	I1124 13:17:00.877479       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 13:17:00.877506       1 main.go:301] handling current node
	I1124 13:17:10.877480       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 13:17:10.877506       1 main.go:301] handling current node
	I1124 13:17:20.877817       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 13:17:20.877852       1 main.go:301] handling current node
	I1124 13:17:30.877775       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 13:17:30.877810       1 main.go:301] handling current node
	I1124 13:17:40.883001       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 13:17:40.883036       1 main.go:301] handling current node
	I1124 13:17:50.886085       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 13:17:50.886118       1 main.go:301] handling current node
	I1124 13:18:00.885957       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 13:18:00.885994       1 main.go:301] handling current node
	I1124 13:18:10.881963       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 13:18:10.881992       1 main.go:301] handling current node
	I1124 13:18:20.876832       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 13:18:20.876908       1 main.go:301] handling current node
	I1124 13:18:30.877155       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 13:18:30.877192       1 main.go:301] handling current node
	
	
	==> kube-apiserver [e080d87ce42a145608e63f7c6b4c14b99b3014112ba7d536610206377da1bcb5] <==
	W1124 13:14:58.627281       1 logging.go:55] [core] [Channel #282 SubChannel #283]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1124 13:15:11.441492       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.100.35.53:443: connect: connection refused
	E1124 13:15:11.441536       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.100.35.53:443: connect: connection refused" logger="UnhandledError"
	W1124 13:15:11.441531       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.100.35.53:443: connect: connection refused
	E1124 13:15:11.441657       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.100.35.53:443: connect: connection refused" logger="UnhandledError"
	W1124 13:15:11.459520       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.100.35.53:443: connect: connection refused
	E1124 13:15:11.459555       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.100.35.53:443: connect: connection refused" logger="UnhandledError"
	W1124 13:15:11.463095       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.100.35.53:443: connect: connection refused
	E1124 13:15:11.463129       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.100.35.53:443: connect: connection refused" logger="UnhandledError"
	W1124 13:15:17.070835       1 handler_proxy.go:99] no RequestInfo found in the context
	E1124 13:15:17.070924       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1124 13:15:17.071351       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.119.34:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.119.34:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.107.119.34:443: connect: connection refused" logger="UnhandledError"
	E1124 13:15:17.073017       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.119.34:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.119.34:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.107.119.34:443: connect: connection refused" logger="UnhandledError"
	E1124 13:15:17.078145       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.119.34:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.119.34:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.107.119.34:443: connect: connection refused" logger="UnhandledError"
	E1124 13:15:17.099195       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.119.34:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.119.34:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.107.119.34:443: connect: connection refused" logger="UnhandledError"
	I1124 13:15:17.173340       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1124 13:15:59.805759       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:40680: use of closed network connection
	E1124 13:15:59.949293       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:40694: use of closed network connection
	I1124 13:16:11.985900       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1124 13:16:12.167606       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.99.117.203"}
	I1124 13:16:16.162216       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1124 13:18:37.259839       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.97.165.244"}
	
	
	==> kube-controller-manager [8bd061f25cd271e0f1c7d640c968152672462e55b0bd0013dd192360bd8041bf] <==
	I1124 13:14:28.584217       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1124 13:14:28.584237       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1124 13:14:28.584300       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1124 13:14:28.584308       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1124 13:14:28.584433       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1124 13:14:28.584603       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1124 13:14:28.584628       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1124 13:14:28.584658       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1124 13:14:28.584681       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1124 13:14:28.585020       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1124 13:14:28.585079       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1124 13:14:28.585098       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1124 13:14:28.585480       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1124 13:14:28.585580       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1124 13:14:28.589903       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 13:14:28.593212       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1124 13:14:28.606444       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1124 13:14:58.593209       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1124 13:14:58.593344       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1124 13:14:58.593386       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1124 13:14:58.612630       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1124 13:14:58.615767       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1124 13:14:58.694124       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 13:14:58.716649       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 13:15:13.539448       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [1cd2d69a4521db2c270e5a2192b5d29f185e8986efeacc56186cd5c8a32fba30] <==
	I1124 13:14:30.358053       1 server_linux.go:53] "Using iptables proxy"
	I1124 13:14:30.518446       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1124 13:14:30.622233       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1124 13:14:30.624727       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1124 13:14:30.625005       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 13:14:30.885381       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 13:14:30.885467       1 server_linux.go:132] "Using iptables Proxier"
	I1124 13:14:30.912577       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 13:14:30.928978       1 server.go:527] "Version info" version="v1.34.1"
	I1124 13:14:30.942517       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 13:14:30.985243       1 config.go:200] "Starting service config controller"
	I1124 13:14:30.985331       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 13:14:30.985367       1 config.go:106] "Starting endpoint slice config controller"
	I1124 13:14:30.985373       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 13:14:30.985389       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 13:14:30.985394       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 13:14:30.986261       1 config.go:309] "Starting node config controller"
	I1124 13:14:30.986312       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 13:14:30.986343       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1124 13:14:31.087145       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1124 13:14:31.087945       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1124 13:14:31.090068       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [73d6f909ae2dca0d1fb7c89dd3fa82bdb9b4d2d1c56e66703aa1b07a967e3cc6] <==
	E1124 13:14:21.615330       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1124 13:14:21.615331       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1124 13:14:21.615328       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1124 13:14:21.615431       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1124 13:14:21.615438       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1124 13:14:21.616290       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1124 13:14:21.616325       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1124 13:14:21.616325       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1124 13:14:21.616368       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1124 13:14:21.616397       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1124 13:14:21.616492       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1124 13:14:21.616542       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1124 13:14:21.616527       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1124 13:14:21.616513       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1124 13:14:22.420604       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1124 13:14:22.476575       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1124 13:14:22.530322       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1124 13:14:22.558154       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1124 13:14:22.604306       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1124 13:14:22.722419       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1124 13:14:22.724281       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1124 13:14:22.750209       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1124 13:14:22.770176       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1124 13:14:22.835417       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	I1124 13:14:23.312262       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 24 13:16:36 addons-715644 kubelet[1307]: I1124 13:16:36.037758    1307 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tgs8v\" (UniqueName: \"kubernetes.io/projected/e91a20fe-0475-4c8c-a503-87137014b961-kube-api-access-tgs8v\") pod \"e91a20fe-0475-4c8c-a503-87137014b961\" (UID: \"e91a20fe-0475-4c8c-a503-87137014b961\") "
	Nov 24 13:16:36 addons-715644 kubelet[1307]: I1124 13:16:36.037789    1307 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/e91a20fe-0475-4c8c-a503-87137014b961-gcp-creds\") pod \"e91a20fe-0475-4c8c-a503-87137014b961\" (UID: \"e91a20fe-0475-4c8c-a503-87137014b961\") "
	Nov 24 13:16:36 addons-715644 kubelet[1307]: I1124 13:16:36.037932    1307 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e91a20fe-0475-4c8c-a503-87137014b961-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "e91a20fe-0475-4c8c-a503-87137014b961" (UID: "e91a20fe-0475-4c8c-a503-87137014b961"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Nov 24 13:16:36 addons-715644 kubelet[1307]: I1124 13:16:36.037995    1307 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/e91a20fe-0475-4c8c-a503-87137014b961-gcp-creds\") on node \"addons-715644\" DevicePath \"\""
	Nov 24 13:16:36 addons-715644 kubelet[1307]: I1124 13:16:36.040274    1307 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e91a20fe-0475-4c8c-a503-87137014b961-kube-api-access-tgs8v" (OuterVolumeSpecName: "kube-api-access-tgs8v") pod "e91a20fe-0475-4c8c-a503-87137014b961" (UID: "e91a20fe-0475-4c8c-a503-87137014b961"). InnerVolumeSpecName "kube-api-access-tgs8v". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Nov 24 13:16:36 addons-715644 kubelet[1307]: I1124 13:16:36.041865    1307 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/hostpath.csi.k8s.io^c8c40074-c937-11f0-aa4c-fe605ef89cce" (OuterVolumeSpecName: "task-pv-storage") pod "e91a20fe-0475-4c8c-a503-87137014b961" (UID: "e91a20fe-0475-4c8c-a503-87137014b961"). InnerVolumeSpecName "pvc-c3bebb1f-9408-47fd-8fbb-a7a818b89227". PluginName "kubernetes.io/csi", VolumeGIDValue ""
	Nov 24 13:16:36 addons-715644 kubelet[1307]: I1124 13:16:36.138389    1307 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tgs8v\" (UniqueName: \"kubernetes.io/projected/e91a20fe-0475-4c8c-a503-87137014b961-kube-api-access-tgs8v\") on node \"addons-715644\" DevicePath \"\""
	Nov 24 13:16:36 addons-715644 kubelet[1307]: I1124 13:16:36.138440    1307 reconciler_common.go:292] "operationExecutor.UnmountDevice started for volume \"pvc-c3bebb1f-9408-47fd-8fbb-a7a818b89227\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^c8c40074-c937-11f0-aa4c-fe605ef89cce\") on node \"addons-715644\" "
	Nov 24 13:16:36 addons-715644 kubelet[1307]: I1124 13:16:36.142476    1307 operation_generator.go:895] UnmountDevice succeeded for volume "pvc-c3bebb1f-9408-47fd-8fbb-a7a818b89227" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^c8c40074-c937-11f0-aa4c-fe605ef89cce") on node "addons-715644"
	Nov 24 13:16:36 addons-715644 kubelet[1307]: I1124 13:16:36.238885    1307 reconciler_common.go:299] "Volume detached for volume \"pvc-c3bebb1f-9408-47fd-8fbb-a7a818b89227\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^c8c40074-c937-11f0-aa4c-fe605ef89cce\") on node \"addons-715644\" DevicePath \"\""
	Nov 24 13:16:36 addons-715644 kubelet[1307]: I1124 13:16:36.434294    1307 scope.go:117] "RemoveContainer" containerID="04feb1070bd1e35a5250fe5cc80c9d3a3be9225596ee33eae1e2ded9564ddaa0"
	Nov 24 13:16:36 addons-715644 kubelet[1307]: I1124 13:16:36.443619    1307 scope.go:117] "RemoveContainer" containerID="04feb1070bd1e35a5250fe5cc80c9d3a3be9225596ee33eae1e2ded9564ddaa0"
	Nov 24 13:16:36 addons-715644 kubelet[1307]: E1124 13:16:36.445180    1307 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"04feb1070bd1e35a5250fe5cc80c9d3a3be9225596ee33eae1e2ded9564ddaa0\": container with ID starting with 04feb1070bd1e35a5250fe5cc80c9d3a3be9225596ee33eae1e2ded9564ddaa0 not found: ID does not exist" containerID="04feb1070bd1e35a5250fe5cc80c9d3a3be9225596ee33eae1e2ded9564ddaa0"
	Nov 24 13:16:36 addons-715644 kubelet[1307]: I1124 13:16:36.445233    1307 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"04feb1070bd1e35a5250fe5cc80c9d3a3be9225596ee33eae1e2ded9564ddaa0"} err="failed to get container status \"04feb1070bd1e35a5250fe5cc80c9d3a3be9225596ee33eae1e2ded9564ddaa0\": rpc error: code = NotFound desc = could not find container \"04feb1070bd1e35a5250fe5cc80c9d3a3be9225596ee33eae1e2ded9564ddaa0\": container with ID starting with 04feb1070bd1e35a5250fe5cc80c9d3a3be9225596ee33eae1e2ded9564ddaa0 not found: ID does not exist"
	Nov 24 13:16:37 addons-715644 kubelet[1307]: I1124 13:16:37.905817    1307 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e91a20fe-0475-4c8c-a503-87137014b961" path="/var/lib/kubelet/pods/e91a20fe-0475-4c8c-a503-87137014b961/volumes"
	Nov 24 13:16:38 addons-715644 kubelet[1307]: I1124 13:16:38.904037    1307 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-hxftx" secret="" err="secret \"gcp-auth\" not found"
	Nov 24 13:16:41 addons-715644 kubelet[1307]: I1124 13:16:41.904095    1307 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-h8tqs" secret="" err="secret \"gcp-auth\" not found"
	Nov 24 13:16:47 addons-715644 kubelet[1307]: I1124 13:16:47.903771    1307 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-kx44z" secret="" err="secret \"gcp-auth\" not found"
	Nov 24 13:16:55 addons-715644 kubelet[1307]: I1124 13:16:55.903945    1307 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6b586f9694-x6s72" secret="" err="secret \"gcp-auth\" not found"
	Nov 24 13:17:50 addons-715644 kubelet[1307]: I1124 13:17:50.903111    1307 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-kx44z" secret="" err="secret \"gcp-auth\" not found"
	Nov 24 13:17:55 addons-715644 kubelet[1307]: I1124 13:17:55.903559    1307 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-hxftx" secret="" err="secret \"gcp-auth\" not found"
	Nov 24 13:18:01 addons-715644 kubelet[1307]: I1124 13:18:01.903372    1307 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-h8tqs" secret="" err="secret \"gcp-auth\" not found"
	Nov 24 13:18:12 addons-715644 kubelet[1307]: I1124 13:18:12.903644    1307 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-6b586f9694-x6s72" secret="" err="secret \"gcp-auth\" not found"
	Nov 24 13:18:37 addons-715644 kubelet[1307]: I1124 13:18:37.336301    1307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/096a889c-1b51-4d11-a500-1f686d3330d1-gcp-creds\") pod \"hello-world-app-5d498dc89-pc4vt\" (UID: \"096a889c-1b51-4d11-a500-1f686d3330d1\") " pod="default/hello-world-app-5d498dc89-pc4vt"
	Nov 24 13:18:37 addons-715644 kubelet[1307]: I1124 13:18:37.336349    1307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcrj6\" (UniqueName: \"kubernetes.io/projected/096a889c-1b51-4d11-a500-1f686d3330d1-kube-api-access-kcrj6\") pod \"hello-world-app-5d498dc89-pc4vt\" (UID: \"096a889c-1b51-4d11-a500-1f686d3330d1\") " pod="default/hello-world-app-5d498dc89-pc4vt"
	
	
	==> storage-provisioner [80ca7185520801a449353432d8a29471e92f942c8e6b30f587a794abac0fb7dd] <==
	W1124 13:18:14.681307       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:18:16.683734       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:18:16.687473       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:18:18.690274       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:18:18.693839       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:18:20.696679       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:18:20.700473       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:18:22.703542       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:18:22.708201       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:18:24.710783       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:18:24.714707       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:18:26.717827       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:18:26.721205       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:18:28.723641       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:18:28.728123       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:18:30.730864       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:18:30.734611       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:18:32.737749       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:18:32.741246       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:18:34.743713       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:18:34.747356       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:18:36.749565       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:18:36.752784       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:18:38.755169       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:18:38.759306       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-715644 -n addons-715644
helpers_test.go:269: (dbg) Run:  kubectl --context addons-715644 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-gq29m ingress-nginx-admission-patch-ds6p4
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-715644 describe pod ingress-nginx-admission-create-gq29m ingress-nginx-admission-patch-ds6p4
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-715644 describe pod ingress-nginx-admission-create-gq29m ingress-nginx-admission-patch-ds6p4: exit status 1 (55.001608ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-gq29m" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-ds6p4" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-715644 describe pod ingress-nginx-admission-create-gq29m ingress-nginx-admission-patch-ds6p4: exit status 1
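Aside: the two NotFound errors are expected. ingress-nginx-admission-create and ingress-nginx-admission-patch are the one-shot admission webhook Jobs shipped with the ingress addon, and their completed pods were evidently removed between the pod listing above and the describe call. If their status is still wanted for debugging, one option (assuming the addon's default ingress-nginx namespace) is to query the Jobs rather than the pods:

	kubectl --context addons-715644 -n ingress-nginx get jobs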
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-715644 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-715644 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (241.780025ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1124 13:18:39.550637  367684 out.go:360] Setting OutFile to fd 1 ...
	I1124 13:18:39.550871  367684 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:18:39.550879  367684 out.go:374] Setting ErrFile to fd 2...
	I1124 13:18:39.550883  367684 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:18:39.551107  367684 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-348000/.minikube/bin
	I1124 13:18:39.551350  367684 mustload.go:66] Loading cluster: addons-715644
	I1124 13:18:39.551665  367684 config.go:182] Loaded profile config "addons-715644": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 13:18:39.551680  367684 addons.go:622] checking whether the cluster is paused
	I1124 13:18:39.551761  367684 config.go:182] Loaded profile config "addons-715644": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 13:18:39.551773  367684 host.go:66] Checking if "addons-715644" exists ...
	I1124 13:18:39.552233  367684 cli_runner.go:164] Run: docker container inspect addons-715644 --format={{.State.Status}}
	I1124 13:18:39.569045  367684 ssh_runner.go:195] Run: systemctl --version
	I1124 13:18:39.569097  367684 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-715644
	I1124 13:18:39.586204  367684 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/addons-715644/id_rsa Username:docker}
	I1124 13:18:39.683860  367684 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 13:18:39.683955  367684 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 13:18:39.711947  367684 cri.go:89] found id: "e235aa339c9ac5d961e51c8ba6ba912cc243b46b0ef6c59202698fe46121aefb"
	I1124 13:18:39.711971  367684 cri.go:89] found id: "32b77a8342024730ccea78bac96aafa904a8452871f7ad6ede2d73201b5297ae"
	I1124 13:18:39.711976  367684 cri.go:89] found id: "9946073c4dbc00e94343f77ffd10424e77a179847087d6518167e5967c01b6ac"
	I1124 13:18:39.711980  367684 cri.go:89] found id: "97f3de9ff4a387cd014af0563e9b3b98a067b536fde246c22a5e118e0732e718"
	I1124 13:18:39.711983  367684 cri.go:89] found id: "3492cc92692158b5e6044d1ea2c4d57bac8bda63d2ed59eab5592914657894e2"
	I1124 13:18:39.711988  367684 cri.go:89] found id: "db3d376ec41b7312ca6691a8743e9d9a78aa3d7d989fa62b5d4bed28d1352645"
	I1124 13:18:39.711994  367684 cri.go:89] found id: "a9fad48eebd55f9988e66d76b4c1ba8045a48e49c7d2ec247434bda584f848bd"
	I1124 13:18:39.711998  367684 cri.go:89] found id: "7f9d1b3fe4a9063805681f97d4f25ffb0176dbe2fc494b57f8b0ba808d906ab6"
	I1124 13:18:39.712003  367684 cri.go:89] found id: "de0cc746d3ed0efaaab995d7c9060f139d2048fdf2dc530c24968b27a493b199"
	I1124 13:18:39.712010  367684 cri.go:89] found id: "93fbc223db37d476f76feddf4b1b953455b15bfd5655c2e0f21618dcd9149be0"
	I1124 13:18:39.712019  367684 cri.go:89] found id: "9be932139aaef6d1a0813197ebd25803631d854bd08cc570ceb08ebb61e42533"
	I1124 13:18:39.712023  367684 cri.go:89] found id: "e91ca551d1e0e1b67d5a38e6388b9a94476991f0536bc399a4fc40157634ce1f"
	I1124 13:18:39.712028  367684 cri.go:89] found id: "edf878679786a9abe21c9897fa78bbc59bd532ce6f4ce69457f2e17deb93802a"
	I1124 13:18:39.712036  367684 cri.go:89] found id: "17cd086a0a85475fa6e37dbc6d551664d7ac78bb7fdc3540fb1bd1e175d77793"
	I1124 13:18:39.712041  367684 cri.go:89] found id: "33bdbf096e506d847514d785957b6ff08d7be79c8c2ce3cad269fc769d56f682"
	I1124 13:18:39.712055  367684 cri.go:89] found id: "bef94f1c94dd311ef47360262b10fc75702b47761e4bf690355c88cd5acbf47d"
	I1124 13:18:39.712062  367684 cri.go:89] found id: "83f5e4de5d19483eba28cce6cc0496cbd37a7f45e5dd8fdd549b5d2a0fe93004"
	I1124 13:18:39.712069  367684 cri.go:89] found id: "9fc9fbc51a1d5d85e698682518d6aabdc2c3030302e75bcb87adb6ae7d4fac0e"
	I1124 13:18:39.712073  367684 cri.go:89] found id: "80ca7185520801a449353432d8a29471e92f942c8e6b30f587a794abac0fb7dd"
	I1124 13:18:39.712076  367684 cri.go:89] found id: "3c0239d349ace6e30dffd2560683ba8f02197dfb6eb490d1097a535ae3d5599f"
	I1124 13:18:39.712079  367684 cri.go:89] found id: "1cd2d69a4521db2c270e5a2192b5d29f185e8986efeacc56186cd5c8a32fba30"
	I1124 13:18:39.712082  367684 cri.go:89] found id: "f906d790e557cecfdacb1936cb0ed8443cc0bc9466c826f9d800db6bf44bf47e"
	I1124 13:18:39.712090  367684 cri.go:89] found id: "8bd061f25cd271e0f1c7d640c968152672462e55b0bd0013dd192360bd8041bf"
	I1124 13:18:39.712095  367684 cri.go:89] found id: "73d6f909ae2dca0d1fb7c89dd3fa82bdb9b4d2d1c56e66703aa1b07a967e3cc6"
	I1124 13:18:39.712103  367684 cri.go:89] found id: "e080d87ce42a145608e63f7c6b4c14b99b3014112ba7d536610206377da1bcb5"
	I1124 13:18:39.712108  367684 cri.go:89] found id: ""
	I1124 13:18:39.712151  367684 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 13:18:39.725830  367684 out.go:203] 
	W1124 13:18:39.727016  367684 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T13:18:39Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T13:18:39Z" level=error msg="open /run/runc: no such file or directory"
	
	W1124 13:18:39.727058  367684 out.go:285] * 
	* 
	W1124 13:18:39.731379  367684 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1124 13:18:39.732544  367684 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable ingress-dns addon: args "out/minikube-linux-amd64 -p addons-715644 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-715644 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-715644 addons disable ingress --alsologtostderr -v=1: exit status 11 (242.525399ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1124 13:18:39.792824  367747 out.go:360] Setting OutFile to fd 1 ...
	I1124 13:18:39.792975  367747 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:18:39.792986  367747 out.go:374] Setting ErrFile to fd 2...
	I1124 13:18:39.792994  367747 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:18:39.793196  367747 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-348000/.minikube/bin
	I1124 13:18:39.793502  367747 mustload.go:66] Loading cluster: addons-715644
	I1124 13:18:39.793861  367747 config.go:182] Loaded profile config "addons-715644": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 13:18:39.793882  367747 addons.go:622] checking whether the cluster is paused
	I1124 13:18:39.793995  367747 config.go:182] Loaded profile config "addons-715644": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 13:18:39.794012  367747 host.go:66] Checking if "addons-715644" exists ...
	I1124 13:18:39.794395  367747 cli_runner.go:164] Run: docker container inspect addons-715644 --format={{.State.Status}}
	I1124 13:18:39.811512  367747 ssh_runner.go:195] Run: systemctl --version
	I1124 13:18:39.811566  367747 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-715644
	I1124 13:18:39.828140  367747 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/addons-715644/id_rsa Username:docker}
	I1124 13:18:39.926993  367747 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 13:18:39.927051  367747 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 13:18:39.955097  367747 cri.go:89] found id: "e235aa339c9ac5d961e51c8ba6ba912cc243b46b0ef6c59202698fe46121aefb"
	I1124 13:18:39.955124  367747 cri.go:89] found id: "32b77a8342024730ccea78bac96aafa904a8452871f7ad6ede2d73201b5297ae"
	I1124 13:18:39.955131  367747 cri.go:89] found id: "9946073c4dbc00e94343f77ffd10424e77a179847087d6518167e5967c01b6ac"
	I1124 13:18:39.955136  367747 cri.go:89] found id: "97f3de9ff4a387cd014af0563e9b3b98a067b536fde246c22a5e118e0732e718"
	I1124 13:18:39.955142  367747 cri.go:89] found id: "3492cc92692158b5e6044d1ea2c4d57bac8bda63d2ed59eab5592914657894e2"
	I1124 13:18:39.955147  367747 cri.go:89] found id: "db3d376ec41b7312ca6691a8743e9d9a78aa3d7d989fa62b5d4bed28d1352645"
	I1124 13:18:39.955152  367747 cri.go:89] found id: "a9fad48eebd55f9988e66d76b4c1ba8045a48e49c7d2ec247434bda584f848bd"
	I1124 13:18:39.955157  367747 cri.go:89] found id: "7f9d1b3fe4a9063805681f97d4f25ffb0176dbe2fc494b57f8b0ba808d906ab6"
	I1124 13:18:39.955165  367747 cri.go:89] found id: "de0cc746d3ed0efaaab995d7c9060f139d2048fdf2dc530c24968b27a493b199"
	I1124 13:18:39.955172  367747 cri.go:89] found id: "93fbc223db37d476f76feddf4b1b953455b15bfd5655c2e0f21618dcd9149be0"
	I1124 13:18:39.955179  367747 cri.go:89] found id: "9be932139aaef6d1a0813197ebd25803631d854bd08cc570ceb08ebb61e42533"
	I1124 13:18:39.955182  367747 cri.go:89] found id: "e91ca551d1e0e1b67d5a38e6388b9a94476991f0536bc399a4fc40157634ce1f"
	I1124 13:18:39.955184  367747 cri.go:89] found id: "edf878679786a9abe21c9897fa78bbc59bd532ce6f4ce69457f2e17deb93802a"
	I1124 13:18:39.955187  367747 cri.go:89] found id: "17cd086a0a85475fa6e37dbc6d551664d7ac78bb7fdc3540fb1bd1e175d77793"
	I1124 13:18:39.955190  367747 cri.go:89] found id: "33bdbf096e506d847514d785957b6ff08d7be79c8c2ce3cad269fc769d56f682"
	I1124 13:18:39.955196  367747 cri.go:89] found id: "bef94f1c94dd311ef47360262b10fc75702b47761e4bf690355c88cd5acbf47d"
	I1124 13:18:39.955200  367747 cri.go:89] found id: "83f5e4de5d19483eba28cce6cc0496cbd37a7f45e5dd8fdd549b5d2a0fe93004"
	I1124 13:18:39.955205  367747 cri.go:89] found id: "9fc9fbc51a1d5d85e698682518d6aabdc2c3030302e75bcb87adb6ae7d4fac0e"
	I1124 13:18:39.955207  367747 cri.go:89] found id: "80ca7185520801a449353432d8a29471e92f942c8e6b30f587a794abac0fb7dd"
	I1124 13:18:39.955210  367747 cri.go:89] found id: "3c0239d349ace6e30dffd2560683ba8f02197dfb6eb490d1097a535ae3d5599f"
	I1124 13:18:39.955216  367747 cri.go:89] found id: "1cd2d69a4521db2c270e5a2192b5d29f185e8986efeacc56186cd5c8a32fba30"
	I1124 13:18:39.955219  367747 cri.go:89] found id: "f906d790e557cecfdacb1936cb0ed8443cc0bc9466c826f9d800db6bf44bf47e"
	I1124 13:18:39.955222  367747 cri.go:89] found id: "8bd061f25cd271e0f1c7d640c968152672462e55b0bd0013dd192360bd8041bf"
	I1124 13:18:39.955235  367747 cri.go:89] found id: "73d6f909ae2dca0d1fb7c89dd3fa82bdb9b4d2d1c56e66703aa1b07a967e3cc6"
	I1124 13:18:39.955238  367747 cri.go:89] found id: "e080d87ce42a145608e63f7c6b4c14b99b3014112ba7d536610206377da1bcb5"
	I1124 13:18:39.955240  367747 cri.go:89] found id: ""
	I1124 13:18:39.955284  367747 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 13:18:39.968864  367747 out.go:203] 
	W1124 13:18:39.970024  367747 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T13:18:39Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T13:18:39Z" level=error msg="open /run/runc: no such file or directory"
	
	W1124 13:18:39.970045  367747 out.go:285] * 
	* 
	W1124 13:18:39.973941  367747 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1124 13:18:39.975312  367747 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable ingress addon: args "out/minikube-linux-amd64 -p addons-715644 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (148.22s)
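Note on the repeated exit status 11: before disabling an addon, minikube first checks whether the cluster is paused. As the stderr blocks above show, it lists the kube-system containers with crictl and then runs sudo runc list -f json; on this crio node /run/runc does not exist, so that second command fails and every addons disable call aborts with MK_ADDON_DISABLE_PAUSED. A rough way to reproduce the check by hand, assuming the addons-715644 profile is still running, is to replay the same two commands over minikube ssh:

	# container listing performed by the paused-state check
	minikube ssh -p addons-715644 -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	# the step that fails on this image
	minikube ssh -p addons-715644 -- sudo runc list -f json
	# observed failure: level=error msg="open /run/runc: no such file or directory"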

TestAddons/parallel/InspektorGadget (5.28s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-j5p27" [39c5f66e-392f-4cbf-b589-5019ccf98282] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.003429938s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-715644 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-715644 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (272.520458ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1124 13:16:14.376423  363480 out.go:360] Setting OutFile to fd 1 ...
	I1124 13:16:14.376711  363480 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:16:14.376723  363480 out.go:374] Setting ErrFile to fd 2...
	I1124 13:16:14.376736  363480 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:16:14.377046  363480 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-348000/.minikube/bin
	I1124 13:16:14.377382  363480 mustload.go:66] Loading cluster: addons-715644
	I1124 13:16:14.377795  363480 config.go:182] Loaded profile config "addons-715644": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 13:16:14.377816  363480 addons.go:622] checking whether the cluster is paused
	I1124 13:16:14.377965  363480 config.go:182] Loaded profile config "addons-715644": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 13:16:14.377987  363480 host.go:66] Checking if "addons-715644" exists ...
	I1124 13:16:14.378419  363480 cli_runner.go:164] Run: docker container inspect addons-715644 --format={{.State.Status}}
	I1124 13:16:14.400233  363480 ssh_runner.go:195] Run: systemctl --version
	I1124 13:16:14.400575  363480 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-715644
	I1124 13:16:14.422067  363480 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/addons-715644/id_rsa Username:docker}
	I1124 13:16:14.524276  363480 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 13:16:14.524362  363480 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 13:16:14.556793  363480 cri.go:89] found id: "32b77a8342024730ccea78bac96aafa904a8452871f7ad6ede2d73201b5297ae"
	I1124 13:16:14.556817  363480 cri.go:89] found id: "9946073c4dbc00e94343f77ffd10424e77a179847087d6518167e5967c01b6ac"
	I1124 13:16:14.556824  363480 cri.go:89] found id: "97f3de9ff4a387cd014af0563e9b3b98a067b536fde246c22a5e118e0732e718"
	I1124 13:16:14.556829  363480 cri.go:89] found id: "3492cc92692158b5e6044d1ea2c4d57bac8bda63d2ed59eab5592914657894e2"
	I1124 13:16:14.556832  363480 cri.go:89] found id: "db3d376ec41b7312ca6691a8743e9d9a78aa3d7d989fa62b5d4bed28d1352645"
	I1124 13:16:14.556838  363480 cri.go:89] found id: "a9fad48eebd55f9988e66d76b4c1ba8045a48e49c7d2ec247434bda584f848bd"
	I1124 13:16:14.556843  363480 cri.go:89] found id: "7f9d1b3fe4a9063805681f97d4f25ffb0176dbe2fc494b57f8b0ba808d906ab6"
	I1124 13:16:14.556848  363480 cri.go:89] found id: "de0cc746d3ed0efaaab995d7c9060f139d2048fdf2dc530c24968b27a493b199"
	I1124 13:16:14.556853  363480 cri.go:89] found id: "93fbc223db37d476f76feddf4b1b953455b15bfd5655c2e0f21618dcd9149be0"
	I1124 13:16:14.556861  363480 cri.go:89] found id: "9be932139aaef6d1a0813197ebd25803631d854bd08cc570ceb08ebb61e42533"
	I1124 13:16:14.556869  363480 cri.go:89] found id: "e91ca551d1e0e1b67d5a38e6388b9a94476991f0536bc399a4fc40157634ce1f"
	I1124 13:16:14.556873  363480 cri.go:89] found id: "edf878679786a9abe21c9897fa78bbc59bd532ce6f4ce69457f2e17deb93802a"
	I1124 13:16:14.556882  363480 cri.go:89] found id: "17cd086a0a85475fa6e37dbc6d551664d7ac78bb7fdc3540fb1bd1e175d77793"
	I1124 13:16:14.556898  363480 cri.go:89] found id: "33bdbf096e506d847514d785957b6ff08d7be79c8c2ce3cad269fc769d56f682"
	I1124 13:16:14.556904  363480 cri.go:89] found id: "bef94f1c94dd311ef47360262b10fc75702b47761e4bf690355c88cd5acbf47d"
	I1124 13:16:14.556913  363480 cri.go:89] found id: "83f5e4de5d19483eba28cce6cc0496cbd37a7f45e5dd8fdd549b5d2a0fe93004"
	I1124 13:16:14.556921  363480 cri.go:89] found id: "9fc9fbc51a1d5d85e698682518d6aabdc2c3030302e75bcb87adb6ae7d4fac0e"
	I1124 13:16:14.556928  363480 cri.go:89] found id: "80ca7185520801a449353432d8a29471e92f942c8e6b30f587a794abac0fb7dd"
	I1124 13:16:14.556932  363480 cri.go:89] found id: "3c0239d349ace6e30dffd2560683ba8f02197dfb6eb490d1097a535ae3d5599f"
	I1124 13:16:14.556937  363480 cri.go:89] found id: "1cd2d69a4521db2c270e5a2192b5d29f185e8986efeacc56186cd5c8a32fba30"
	I1124 13:16:14.556941  363480 cri.go:89] found id: "f906d790e557cecfdacb1936cb0ed8443cc0bc9466c826f9d800db6bf44bf47e"
	I1124 13:16:14.556948  363480 cri.go:89] found id: "8bd061f25cd271e0f1c7d640c968152672462e55b0bd0013dd192360bd8041bf"
	I1124 13:16:14.556954  363480 cri.go:89] found id: "73d6f909ae2dca0d1fb7c89dd3fa82bdb9b4d2d1c56e66703aa1b07a967e3cc6"
	I1124 13:16:14.556958  363480 cri.go:89] found id: "e080d87ce42a145608e63f7c6b4c14b99b3014112ba7d536610206377da1bcb5"
	I1124 13:16:14.556961  363480 cri.go:89] found id: ""
	I1124 13:16:14.557000  363480 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 13:16:14.571734  363480 out.go:203] 
	W1124 13:16:14.572834  363480 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T13:16:14Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T13:16:14Z" level=error msg="open /run/runc: no such file or directory"
	
	W1124 13:16:14.572857  363480 out.go:285] * 
	* 
	W1124 13:16:14.579229  363480 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1124 13:16:14.580380  363480 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable inspektor-gadget addon: args "out/minikube-linux-amd64 -p addons-715644 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (5.28s)

TestAddons/parallel/MetricsServer (5.3s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 3.550038ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-4fdfd" [1f2dc823-40d4-4194-b831-1c70bbcf7b66] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.002703122s
addons_test.go:463: (dbg) Run:  kubectl --context addons-715644 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-715644 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-715644 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (241.59262ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1124 13:16:11.567746  362906 out.go:360] Setting OutFile to fd 1 ...
	I1124 13:16:11.568029  362906 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:16:11.568039  362906 out.go:374] Setting ErrFile to fd 2...
	I1124 13:16:11.568043  362906 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:16:11.568197  362906 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-348000/.minikube/bin
	I1124 13:16:11.568502  362906 mustload.go:66] Loading cluster: addons-715644
	I1124 13:16:11.568856  362906 config.go:182] Loaded profile config "addons-715644": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 13:16:11.568873  362906 addons.go:622] checking whether the cluster is paused
	I1124 13:16:11.568978  362906 config.go:182] Loaded profile config "addons-715644": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 13:16:11.568993  362906 host.go:66] Checking if "addons-715644" exists ...
	I1124 13:16:11.569332  362906 cli_runner.go:164] Run: docker container inspect addons-715644 --format={{.State.Status}}
	I1124 13:16:11.586166  362906 ssh_runner.go:195] Run: systemctl --version
	I1124 13:16:11.586223  362906 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-715644
	I1124 13:16:11.602723  362906 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/addons-715644/id_rsa Username:docker}
	I1124 13:16:11.702140  362906 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 13:16:11.702250  362906 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 13:16:11.729731  362906 cri.go:89] found id: "32b77a8342024730ccea78bac96aafa904a8452871f7ad6ede2d73201b5297ae"
	I1124 13:16:11.729774  362906 cri.go:89] found id: "9946073c4dbc00e94343f77ffd10424e77a179847087d6518167e5967c01b6ac"
	I1124 13:16:11.729781  362906 cri.go:89] found id: "97f3de9ff4a387cd014af0563e9b3b98a067b536fde246c22a5e118e0732e718"
	I1124 13:16:11.729784  362906 cri.go:89] found id: "3492cc92692158b5e6044d1ea2c4d57bac8bda63d2ed59eab5592914657894e2"
	I1124 13:16:11.729788  362906 cri.go:89] found id: "db3d376ec41b7312ca6691a8743e9d9a78aa3d7d989fa62b5d4bed28d1352645"
	I1124 13:16:11.729791  362906 cri.go:89] found id: "a9fad48eebd55f9988e66d76b4c1ba8045a48e49c7d2ec247434bda584f848bd"
	I1124 13:16:11.729794  362906 cri.go:89] found id: "7f9d1b3fe4a9063805681f97d4f25ffb0176dbe2fc494b57f8b0ba808d906ab6"
	I1124 13:16:11.729797  362906 cri.go:89] found id: "de0cc746d3ed0efaaab995d7c9060f139d2048fdf2dc530c24968b27a493b199"
	I1124 13:16:11.729800  362906 cri.go:89] found id: "93fbc223db37d476f76feddf4b1b953455b15bfd5655c2e0f21618dcd9149be0"
	I1124 13:16:11.729812  362906 cri.go:89] found id: "9be932139aaef6d1a0813197ebd25803631d854bd08cc570ceb08ebb61e42533"
	I1124 13:16:11.729818  362906 cri.go:89] found id: "e91ca551d1e0e1b67d5a38e6388b9a94476991f0536bc399a4fc40157634ce1f"
	I1124 13:16:11.729821  362906 cri.go:89] found id: "edf878679786a9abe21c9897fa78bbc59bd532ce6f4ce69457f2e17deb93802a"
	I1124 13:16:11.729824  362906 cri.go:89] found id: "17cd086a0a85475fa6e37dbc6d551664d7ac78bb7fdc3540fb1bd1e175d77793"
	I1124 13:16:11.729827  362906 cri.go:89] found id: "33bdbf096e506d847514d785957b6ff08d7be79c8c2ce3cad269fc769d56f682"
	I1124 13:16:11.729830  362906 cri.go:89] found id: "bef94f1c94dd311ef47360262b10fc75702b47761e4bf690355c88cd5acbf47d"
	I1124 13:16:11.729837  362906 cri.go:89] found id: "83f5e4de5d19483eba28cce6cc0496cbd37a7f45e5dd8fdd549b5d2a0fe93004"
	I1124 13:16:11.729843  362906 cri.go:89] found id: "9fc9fbc51a1d5d85e698682518d6aabdc2c3030302e75bcb87adb6ae7d4fac0e"
	I1124 13:16:11.729848  362906 cri.go:89] found id: "80ca7185520801a449353432d8a29471e92f942c8e6b30f587a794abac0fb7dd"
	I1124 13:16:11.729850  362906 cri.go:89] found id: "3c0239d349ace6e30dffd2560683ba8f02197dfb6eb490d1097a535ae3d5599f"
	I1124 13:16:11.729853  362906 cri.go:89] found id: "1cd2d69a4521db2c270e5a2192b5d29f185e8986efeacc56186cd5c8a32fba30"
	I1124 13:16:11.729856  362906 cri.go:89] found id: "f906d790e557cecfdacb1936cb0ed8443cc0bc9466c826f9d800db6bf44bf47e"
	I1124 13:16:11.729858  362906 cri.go:89] found id: "8bd061f25cd271e0f1c7d640c968152672462e55b0bd0013dd192360bd8041bf"
	I1124 13:16:11.729861  362906 cri.go:89] found id: "73d6f909ae2dca0d1fb7c89dd3fa82bdb9b4d2d1c56e66703aa1b07a967e3cc6"
	I1124 13:16:11.729867  362906 cri.go:89] found id: "e080d87ce42a145608e63f7c6b4c14b99b3014112ba7d536610206377da1bcb5"
	I1124 13:16:11.729870  362906 cri.go:89] found id: ""
	I1124 13:16:11.729925  362906 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 13:16:11.742742  362906 out.go:203] 
	W1124 13:16:11.743653  362906 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T13:16:11Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T13:16:11Z" level=error msg="open /run/runc: no such file or directory"
	
	W1124 13:16:11.743691  362906 out.go:285] * 
	* 
	W1124 13:16:11.748347  362906 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1124 13:16:11.750107  362906 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable metrics-server addon: args "out/minikube-linux-amd64 -p addons-715644 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (5.30s)
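
The metrics-server pod itself came up healthy; the exit 11 happens in minikube's paused-state check during the addon disable, which lists the kube-system containers with crictl and then runs `sudo runc list -f json`, and on this crio node that command exits 1 with "open /run/runc: no such file or directory". A minimal sketch for reproducing the check by hand, assuming the addons-715644 profile from this run is still up; the two inner commands are the ones shown in the stderr above, wrapped in `minikube ssh`:

	# SSH into the node and run the same two commands the paused check uses
	out/minikube-linux-amd64 -p addons-715644 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	out/minikube-linux-amd64 -p addons-715644 ssh -- sudo runc list -f json   # expected to fail with the '/run/runc' error seen above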

x
+
TestAddons/parallel/CSI (34.47s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1124 13:16:02.775932  351593 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1124 13:16:02.779277  351593 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1124 13:16:02.779312  351593 kapi.go:107] duration metric: took 3.432366ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 3.444705ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-715644 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-715644 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-715644 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-715644 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-715644 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [ea31e7a2-be80-41cd-b615-184642850166] Pending
helpers_test.go:352: "task-pv-pod" [ea31e7a2-be80-41cd-b615-184642850166] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [ea31e7a2-be80-41cd-b615-184642850166] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.055712264s
addons_test.go:572: (dbg) Run:  kubectl --context addons-715644 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-715644 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-715644 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-715644 delete pod task-pv-pod
addons_test.go:582: (dbg) Done: kubectl --context addons-715644 delete pod task-pv-pod: (1.157713661s)
addons_test.go:588: (dbg) Run:  kubectl --context addons-715644 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-715644 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-715644 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-715644 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-715644 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-715644 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-715644 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-715644 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-715644 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-715644 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-715644 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-715644 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-715644 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-715644 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [e91a20fe-0475-4c8c-a503-87137014b961] Pending
helpers_test.go:352: "task-pv-pod-restore" [e91a20fe-0475-4c8c-a503-87137014b961] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod-restore" [e91a20fe-0475-4c8c-a503-87137014b961] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003599073s
addons_test.go:614: (dbg) Run:  kubectl --context addons-715644 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-715644 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-715644 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-715644 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-715644 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (239.699468ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1124 13:16:36.823541  365032 out.go:360] Setting OutFile to fd 1 ...
	I1124 13:16:36.823766  365032 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:16:36.823779  365032 out.go:374] Setting ErrFile to fd 2...
	I1124 13:16:36.823783  365032 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:16:36.823988  365032 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-348000/.minikube/bin
	I1124 13:16:36.824225  365032 mustload.go:66] Loading cluster: addons-715644
	I1124 13:16:36.824524  365032 config.go:182] Loaded profile config "addons-715644": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 13:16:36.824539  365032 addons.go:622] checking whether the cluster is paused
	I1124 13:16:36.824613  365032 config.go:182] Loaded profile config "addons-715644": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 13:16:36.824625  365032 host.go:66] Checking if "addons-715644" exists ...
	I1124 13:16:36.825006  365032 cli_runner.go:164] Run: docker container inspect addons-715644 --format={{.State.Status}}
	I1124 13:16:36.841344  365032 ssh_runner.go:195] Run: systemctl --version
	I1124 13:16:36.841396  365032 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-715644
	I1124 13:16:36.857251  365032 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/addons-715644/id_rsa Username:docker}
	I1124 13:16:36.955956  365032 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 13:16:36.956045  365032 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 13:16:36.984485  365032 cri.go:89] found id: "e235aa339c9ac5d961e51c8ba6ba912cc243b46b0ef6c59202698fe46121aefb"
	I1124 13:16:36.984512  365032 cri.go:89] found id: "32b77a8342024730ccea78bac96aafa904a8452871f7ad6ede2d73201b5297ae"
	I1124 13:16:36.984516  365032 cri.go:89] found id: "9946073c4dbc00e94343f77ffd10424e77a179847087d6518167e5967c01b6ac"
	I1124 13:16:36.984519  365032 cri.go:89] found id: "97f3de9ff4a387cd014af0563e9b3b98a067b536fde246c22a5e118e0732e718"
	I1124 13:16:36.984522  365032 cri.go:89] found id: "3492cc92692158b5e6044d1ea2c4d57bac8bda63d2ed59eab5592914657894e2"
	I1124 13:16:36.984527  365032 cri.go:89] found id: "db3d376ec41b7312ca6691a8743e9d9a78aa3d7d989fa62b5d4bed28d1352645"
	I1124 13:16:36.984530  365032 cri.go:89] found id: "a9fad48eebd55f9988e66d76b4c1ba8045a48e49c7d2ec247434bda584f848bd"
	I1124 13:16:36.984532  365032 cri.go:89] found id: "7f9d1b3fe4a9063805681f97d4f25ffb0176dbe2fc494b57f8b0ba808d906ab6"
	I1124 13:16:36.984535  365032 cri.go:89] found id: "de0cc746d3ed0efaaab995d7c9060f139d2048fdf2dc530c24968b27a493b199"
	I1124 13:16:36.984549  365032 cri.go:89] found id: "93fbc223db37d476f76feddf4b1b953455b15bfd5655c2e0f21618dcd9149be0"
	I1124 13:16:36.984558  365032 cri.go:89] found id: "9be932139aaef6d1a0813197ebd25803631d854bd08cc570ceb08ebb61e42533"
	I1124 13:16:36.984563  365032 cri.go:89] found id: "e91ca551d1e0e1b67d5a38e6388b9a94476991f0536bc399a4fc40157634ce1f"
	I1124 13:16:36.984571  365032 cri.go:89] found id: "edf878679786a9abe21c9897fa78bbc59bd532ce6f4ce69457f2e17deb93802a"
	I1124 13:16:36.984575  365032 cri.go:89] found id: "17cd086a0a85475fa6e37dbc6d551664d7ac78bb7fdc3540fb1bd1e175d77793"
	I1124 13:16:36.984580  365032 cri.go:89] found id: "33bdbf096e506d847514d785957b6ff08d7be79c8c2ce3cad269fc769d56f682"
	I1124 13:16:36.984593  365032 cri.go:89] found id: "bef94f1c94dd311ef47360262b10fc75702b47761e4bf690355c88cd5acbf47d"
	I1124 13:16:36.984600  365032 cri.go:89] found id: "83f5e4de5d19483eba28cce6cc0496cbd37a7f45e5dd8fdd549b5d2a0fe93004"
	I1124 13:16:36.984607  365032 cri.go:89] found id: "9fc9fbc51a1d5d85e698682518d6aabdc2c3030302e75bcb87adb6ae7d4fac0e"
	I1124 13:16:36.984611  365032 cri.go:89] found id: "80ca7185520801a449353432d8a29471e92f942c8e6b30f587a794abac0fb7dd"
	I1124 13:16:36.984616  365032 cri.go:89] found id: "3c0239d349ace6e30dffd2560683ba8f02197dfb6eb490d1097a535ae3d5599f"
	I1124 13:16:36.984622  365032 cri.go:89] found id: "1cd2d69a4521db2c270e5a2192b5d29f185e8986efeacc56186cd5c8a32fba30"
	I1124 13:16:36.984626  365032 cri.go:89] found id: "f906d790e557cecfdacb1936cb0ed8443cc0bc9466c826f9d800db6bf44bf47e"
	I1124 13:16:36.984633  365032 cri.go:89] found id: "8bd061f25cd271e0f1c7d640c968152672462e55b0bd0013dd192360bd8041bf"
	I1124 13:16:36.984638  365032 cri.go:89] found id: "73d6f909ae2dca0d1fb7c89dd3fa82bdb9b4d2d1c56e66703aa1b07a967e3cc6"
	I1124 13:16:36.984646  365032 cri.go:89] found id: "e080d87ce42a145608e63f7c6b4c14b99b3014112ba7d536610206377da1bcb5"
	I1124 13:16:36.984651  365032 cri.go:89] found id: ""
	I1124 13:16:36.984711  365032 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 13:16:36.998068  365032 out.go:203] 
	W1124 13:16:36.999196  365032 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T13:16:36Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T13:16:36Z" level=error msg="open /run/runc: no such file or directory"
	
	W1124 13:16:36.999212  365032 out.go:285] * 
	* 
	W1124 13:16:37.003131  365032 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1124 13:16:37.004218  365032 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volumesnapshots addon: args "out/minikube-linux-amd64 -p addons-715644 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-715644 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-715644 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (238.095349ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1124 13:16:37.063506  365094 out.go:360] Setting OutFile to fd 1 ...
	I1124 13:16:37.063741  365094 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:16:37.063750  365094 out.go:374] Setting ErrFile to fd 2...
	I1124 13:16:37.063754  365094 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:16:37.063932  365094 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-348000/.minikube/bin
	I1124 13:16:37.064178  365094 mustload.go:66] Loading cluster: addons-715644
	I1124 13:16:37.064525  365094 config.go:182] Loaded profile config "addons-715644": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 13:16:37.064540  365094 addons.go:622] checking whether the cluster is paused
	I1124 13:16:37.064621  365094 config.go:182] Loaded profile config "addons-715644": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 13:16:37.064633  365094 host.go:66] Checking if "addons-715644" exists ...
	I1124 13:16:37.065007  365094 cli_runner.go:164] Run: docker container inspect addons-715644 --format={{.State.Status}}
	I1124 13:16:37.081474  365094 ssh_runner.go:195] Run: systemctl --version
	I1124 13:16:37.081515  365094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-715644
	I1124 13:16:37.097349  365094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/addons-715644/id_rsa Username:docker}
	I1124 13:16:37.195866  365094 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 13:16:37.195965  365094 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 13:16:37.222825  365094 cri.go:89] found id: "e235aa339c9ac5d961e51c8ba6ba912cc243b46b0ef6c59202698fe46121aefb"
	I1124 13:16:37.222845  365094 cri.go:89] found id: "32b77a8342024730ccea78bac96aafa904a8452871f7ad6ede2d73201b5297ae"
	I1124 13:16:37.222850  365094 cri.go:89] found id: "9946073c4dbc00e94343f77ffd10424e77a179847087d6518167e5967c01b6ac"
	I1124 13:16:37.222853  365094 cri.go:89] found id: "97f3de9ff4a387cd014af0563e9b3b98a067b536fde246c22a5e118e0732e718"
	I1124 13:16:37.222856  365094 cri.go:89] found id: "3492cc92692158b5e6044d1ea2c4d57bac8bda63d2ed59eab5592914657894e2"
	I1124 13:16:37.222860  365094 cri.go:89] found id: "db3d376ec41b7312ca6691a8743e9d9a78aa3d7d989fa62b5d4bed28d1352645"
	I1124 13:16:37.222862  365094 cri.go:89] found id: "a9fad48eebd55f9988e66d76b4c1ba8045a48e49c7d2ec247434bda584f848bd"
	I1124 13:16:37.222865  365094 cri.go:89] found id: "7f9d1b3fe4a9063805681f97d4f25ffb0176dbe2fc494b57f8b0ba808d906ab6"
	I1124 13:16:37.222868  365094 cri.go:89] found id: "de0cc746d3ed0efaaab995d7c9060f139d2048fdf2dc530c24968b27a493b199"
	I1124 13:16:37.222873  365094 cri.go:89] found id: "93fbc223db37d476f76feddf4b1b953455b15bfd5655c2e0f21618dcd9149be0"
	I1124 13:16:37.222878  365094 cri.go:89] found id: "9be932139aaef6d1a0813197ebd25803631d854bd08cc570ceb08ebb61e42533"
	I1124 13:16:37.222881  365094 cri.go:89] found id: "e91ca551d1e0e1b67d5a38e6388b9a94476991f0536bc399a4fc40157634ce1f"
	I1124 13:16:37.222884  365094 cri.go:89] found id: "edf878679786a9abe21c9897fa78bbc59bd532ce6f4ce69457f2e17deb93802a"
	I1124 13:16:37.222898  365094 cri.go:89] found id: "17cd086a0a85475fa6e37dbc6d551664d7ac78bb7fdc3540fb1bd1e175d77793"
	I1124 13:16:37.222903  365094 cri.go:89] found id: "33bdbf096e506d847514d785957b6ff08d7be79c8c2ce3cad269fc769d56f682"
	I1124 13:16:37.222910  365094 cri.go:89] found id: "bef94f1c94dd311ef47360262b10fc75702b47761e4bf690355c88cd5acbf47d"
	I1124 13:16:37.222920  365094 cri.go:89] found id: "83f5e4de5d19483eba28cce6cc0496cbd37a7f45e5dd8fdd549b5d2a0fe93004"
	I1124 13:16:37.222924  365094 cri.go:89] found id: "9fc9fbc51a1d5d85e698682518d6aabdc2c3030302e75bcb87adb6ae7d4fac0e"
	I1124 13:16:37.222927  365094 cri.go:89] found id: "80ca7185520801a449353432d8a29471e92f942c8e6b30f587a794abac0fb7dd"
	I1124 13:16:37.222930  365094 cri.go:89] found id: "3c0239d349ace6e30dffd2560683ba8f02197dfb6eb490d1097a535ae3d5599f"
	I1124 13:16:37.222933  365094 cri.go:89] found id: "1cd2d69a4521db2c270e5a2192b5d29f185e8986efeacc56186cd5c8a32fba30"
	I1124 13:16:37.222936  365094 cri.go:89] found id: "f906d790e557cecfdacb1936cb0ed8443cc0bc9466c826f9d800db6bf44bf47e"
	I1124 13:16:37.222939  365094 cri.go:89] found id: "8bd061f25cd271e0f1c7d640c968152672462e55b0bd0013dd192360bd8041bf"
	I1124 13:16:37.222942  365094 cri.go:89] found id: "73d6f909ae2dca0d1fb7c89dd3fa82bdb9b4d2d1c56e66703aa1b07a967e3cc6"
	I1124 13:16:37.222944  365094 cri.go:89] found id: "e080d87ce42a145608e63f7c6b4c14b99b3014112ba7d536610206377da1bcb5"
	I1124 13:16:37.222947  365094 cri.go:89] found id: ""
	I1124 13:16:37.222982  365094 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 13:16:37.235935  365094 out.go:203] 
	W1124 13:16:37.237187  365094 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T13:16:37Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T13:16:37Z" level=error msg="open /run/runc: no such file or directory"
	
	W1124 13:16:37.237207  365094 out.go:285] * 
	* 
	W1124 13:16:37.241363  365094 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1124 13:16:37.242638  365094 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-amd64 -p addons-715644 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (34.47s)
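
The storage flow in this test passed end to end (both PVCs bound, the snapshot restore worked, and both task-pv pods came up); only the two trailing addon-disable calls hit the same runc paused-check failure as the other addon tests. The long run of `kubectl get pvc hpvc-restore -o jsonpath={.status.phase}` lines above is the test's polling loop while it waits for the restored claim; an equivalent manual wait, assuming the same context and object names from this run, is roughly:

	# Poll the restored claim until it reports Bound (the test allows up to 6m0s), then check the restore pod
	until [ "$(kubectl --context addons-715644 get pvc hpvc-restore -o jsonpath={.status.phase})" = "Bound" ]; do sleep 2; done
	kubectl --context addons-715644 get pods -l app=task-pv-pod-restore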

x
+
TestAddons/parallel/Headlamp (2.58s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-715644 --alsologtostderr -v=1
addons_test.go:808: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable headlamp -p addons-715644 --alsologtostderr -v=1: exit status 11 (286.458083ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1124 13:16:00.280861  361200 out.go:360] Setting OutFile to fd 1 ...
	I1124 13:16:00.281023  361200 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:16:00.281035  361200 out.go:374] Setting ErrFile to fd 2...
	I1124 13:16:00.281042  361200 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:16:00.281339  361200 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-348000/.minikube/bin
	I1124 13:16:00.281722  361200 mustload.go:66] Loading cluster: addons-715644
	I1124 13:16:00.282141  361200 config.go:182] Loaded profile config "addons-715644": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 13:16:00.282164  361200 addons.go:622] checking whether the cluster is paused
	I1124 13:16:00.282287  361200 config.go:182] Loaded profile config "addons-715644": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 13:16:00.282305  361200 host.go:66] Checking if "addons-715644" exists ...
	I1124 13:16:00.282784  361200 cli_runner.go:164] Run: docker container inspect addons-715644 --format={{.State.Status}}
	I1124 13:16:00.301213  361200 ssh_runner.go:195] Run: systemctl --version
	I1124 13:16:00.301271  361200 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-715644
	I1124 13:16:00.318255  361200 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/addons-715644/id_rsa Username:docker}
	I1124 13:16:00.419337  361200 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 13:16:00.419439  361200 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 13:16:00.449103  361200 cri.go:89] found id: "32b77a8342024730ccea78bac96aafa904a8452871f7ad6ede2d73201b5297ae"
	I1124 13:16:00.449126  361200 cri.go:89] found id: "9946073c4dbc00e94343f77ffd10424e77a179847087d6518167e5967c01b6ac"
	I1124 13:16:00.449133  361200 cri.go:89] found id: "97f3de9ff4a387cd014af0563e9b3b98a067b536fde246c22a5e118e0732e718"
	I1124 13:16:00.449137  361200 cri.go:89] found id: "3492cc92692158b5e6044d1ea2c4d57bac8bda63d2ed59eab5592914657894e2"
	I1124 13:16:00.449141  361200 cri.go:89] found id: "db3d376ec41b7312ca6691a8743e9d9a78aa3d7d989fa62b5d4bed28d1352645"
	I1124 13:16:00.449145  361200 cri.go:89] found id: "a9fad48eebd55f9988e66d76b4c1ba8045a48e49c7d2ec247434bda584f848bd"
	I1124 13:16:00.449148  361200 cri.go:89] found id: "7f9d1b3fe4a9063805681f97d4f25ffb0176dbe2fc494b57f8b0ba808d906ab6"
	I1124 13:16:00.449150  361200 cri.go:89] found id: "de0cc746d3ed0efaaab995d7c9060f139d2048fdf2dc530c24968b27a493b199"
	I1124 13:16:00.449153  361200 cri.go:89] found id: "93fbc223db37d476f76feddf4b1b953455b15bfd5655c2e0f21618dcd9149be0"
	I1124 13:16:00.449159  361200 cri.go:89] found id: "9be932139aaef6d1a0813197ebd25803631d854bd08cc570ceb08ebb61e42533"
	I1124 13:16:00.449161  361200 cri.go:89] found id: "e91ca551d1e0e1b67d5a38e6388b9a94476991f0536bc399a4fc40157634ce1f"
	I1124 13:16:00.449165  361200 cri.go:89] found id: "edf878679786a9abe21c9897fa78bbc59bd532ce6f4ce69457f2e17deb93802a"
	I1124 13:16:00.449170  361200 cri.go:89] found id: "17cd086a0a85475fa6e37dbc6d551664d7ac78bb7fdc3540fb1bd1e175d77793"
	I1124 13:16:00.449174  361200 cri.go:89] found id: "33bdbf096e506d847514d785957b6ff08d7be79c8c2ce3cad269fc769d56f682"
	I1124 13:16:00.449178  361200 cri.go:89] found id: "bef94f1c94dd311ef47360262b10fc75702b47761e4bf690355c88cd5acbf47d"
	I1124 13:16:00.449185  361200 cri.go:89] found id: "83f5e4de5d19483eba28cce6cc0496cbd37a7f45e5dd8fdd549b5d2a0fe93004"
	I1124 13:16:00.449195  361200 cri.go:89] found id: "9fc9fbc51a1d5d85e698682518d6aabdc2c3030302e75bcb87adb6ae7d4fac0e"
	I1124 13:16:00.449202  361200 cri.go:89] found id: "80ca7185520801a449353432d8a29471e92f942c8e6b30f587a794abac0fb7dd"
	I1124 13:16:00.449207  361200 cri.go:89] found id: "3c0239d349ace6e30dffd2560683ba8f02197dfb6eb490d1097a535ae3d5599f"
	I1124 13:16:00.449211  361200 cri.go:89] found id: "1cd2d69a4521db2c270e5a2192b5d29f185e8986efeacc56186cd5c8a32fba30"
	I1124 13:16:00.449219  361200 cri.go:89] found id: "f906d790e557cecfdacb1936cb0ed8443cc0bc9466c826f9d800db6bf44bf47e"
	I1124 13:16:00.449230  361200 cri.go:89] found id: "8bd061f25cd271e0f1c7d640c968152672462e55b0bd0013dd192360bd8041bf"
	I1124 13:16:00.449234  361200 cri.go:89] found id: "73d6f909ae2dca0d1fb7c89dd3fa82bdb9b4d2d1c56e66703aa1b07a967e3cc6"
	I1124 13:16:00.449239  361200 cri.go:89] found id: "e080d87ce42a145608e63f7c6b4c14b99b3014112ba7d536610206377da1bcb5"
	I1124 13:16:00.449244  361200 cri.go:89] found id: ""
	I1124 13:16:00.449288  361200 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 13:16:00.464964  361200 out.go:203] 
	W1124 13:16:00.466640  361200 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T13:16:00Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T13:16:00Z" level=error msg="open /run/runc: no such file or directory"
	
	W1124 13:16:00.466663  361200 out.go:285] * 
	* 
	W1124 13:16:00.472118  361200 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1124 13:16:00.473404  361200 out.go:203] 

** /stderr **
addons_test.go:810: failed to enable headlamp addon: args: "out/minikube-linux-amd64 addons enable headlamp -p addons-715644 --alsologtostderr -v=1": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-715644
helpers_test.go:243: (dbg) docker inspect addons-715644:

-- stdout --
	[
	    {
	        "Id": "5d903f1f5c35e8ffb34bf9574da2741bdfd2ee1aa57d5cebb162725d11b79768",
	        "Created": "2025-11-24T13:14:06.670171194Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 353602,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T13:14:06.70214882Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/5d903f1f5c35e8ffb34bf9574da2741bdfd2ee1aa57d5cebb162725d11b79768/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/5d903f1f5c35e8ffb34bf9574da2741bdfd2ee1aa57d5cebb162725d11b79768/hostname",
	        "HostsPath": "/var/lib/docker/containers/5d903f1f5c35e8ffb34bf9574da2741bdfd2ee1aa57d5cebb162725d11b79768/hosts",
	        "LogPath": "/var/lib/docker/containers/5d903f1f5c35e8ffb34bf9574da2741bdfd2ee1aa57d5cebb162725d11b79768/5d903f1f5c35e8ffb34bf9574da2741bdfd2ee1aa57d5cebb162725d11b79768-json.log",
	        "Name": "/addons-715644",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-715644:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-715644",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "5d903f1f5c35e8ffb34bf9574da2741bdfd2ee1aa57d5cebb162725d11b79768",
	                "LowerDir": "/var/lib/docker/overlay2/31f2ea9bb4a41b900c3dcfe0f2b307129501eb87b5f288dac6764aae643d7406-init/diff:/var/lib/docker/overlay2/b17d6205cf290186b389ac7c1255d7274fea54ef27df9ff8755bddd2d25eb638/diff",
	                "MergedDir": "/var/lib/docker/overlay2/31f2ea9bb4a41b900c3dcfe0f2b307129501eb87b5f288dac6764aae643d7406/merged",
	                "UpperDir": "/var/lib/docker/overlay2/31f2ea9bb4a41b900c3dcfe0f2b307129501eb87b5f288dac6764aae643d7406/diff",
	                "WorkDir": "/var/lib/docker/overlay2/31f2ea9bb4a41b900c3dcfe0f2b307129501eb87b5f288dac6764aae643d7406/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-715644",
	                "Source": "/var/lib/docker/volumes/addons-715644/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-715644",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-715644",
	                "name.minikube.sigs.k8s.io": "addons-715644",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "984b63ae3b6998d3d34778038e297e50dea66a43df6a0c148ff497e76e3d0173",
	            "SandboxKey": "/var/run/docker/netns/984b63ae3b69",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33143"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33144"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33147"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33145"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33146"
	                    }
	                ]
	            },
	            "Networks": {
	                "addons-715644": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "812ceb5bd489f198e66065cdde08eb9cb5e60b9e51f6b0e99123e2b983afdcf3",
	                    "EndpointID": "b4c7a61c550f4214a83e09cbe3642a6ccc1a5a993b98e97ce5be544c4a3081a7",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "0a:cf:00:bd:1c:74",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-715644",
	                        "5d903f1f5c35"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
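
The ssh client in the stderr above dials 127.0.0.1:33143, which is the "22/tcp" HostPort published in this inspect output. The same value can be read back with the exact format string minikube runs (copied verbatim from the cli_runner line earlier; the surrounding single quotes in the output come from the format string itself):

	docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-715644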
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-715644 -n addons-715644
helpers_test.go:252: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-715644 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-715644 logs -n 25: (1.113786174s)
helpers_test.go:260: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-053089 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-053089   │ jenkins │ v1.37.0 │ 24 Nov 25 13:13 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 24 Nov 25 13:13 UTC │ 24 Nov 25 13:13 UTC │
	│ delete  │ -p download-only-053089                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-053089   │ jenkins │ v1.37.0 │ 24 Nov 25 13:13 UTC │ 24 Nov 25 13:13 UTC │
	│ start   │ -o=json --download-only -p download-only-176855 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-176855   │ jenkins │ v1.37.0 │ 24 Nov 25 13:13 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 24 Nov 25 13:13 UTC │ 24 Nov 25 13:13 UTC │
	│ delete  │ -p download-only-176855                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-176855   │ jenkins │ v1.37.0 │ 24 Nov 25 13:13 UTC │ 24 Nov 25 13:13 UTC │
	│ delete  │ -p download-only-053089                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-053089   │ jenkins │ v1.37.0 │ 24 Nov 25 13:13 UTC │ 24 Nov 25 13:13 UTC │
	│ delete  │ -p download-only-176855                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-176855   │ jenkins │ v1.37.0 │ 24 Nov 25 13:13 UTC │ 24 Nov 25 13:13 UTC │
	│ start   │ --download-only -p download-docker-849908 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-849908 │ jenkins │ v1.37.0 │ 24 Nov 25 13:13 UTC │                     │
	│ delete  │ -p download-docker-849908                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-849908 │ jenkins │ v1.37.0 │ 24 Nov 25 13:13 UTC │ 24 Nov 25 13:13 UTC │
	│ start   │ --download-only -p binary-mirror-545470 --alsologtostderr --binary-mirror http://127.0.0.1:44271 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-545470   │ jenkins │ v1.37.0 │ 24 Nov 25 13:13 UTC │                     │
	│ delete  │ -p binary-mirror-545470                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-545470   │ jenkins │ v1.37.0 │ 24 Nov 25 13:13 UTC │ 24 Nov 25 13:13 UTC │
	│ addons  │ enable dashboard -p addons-715644                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-715644          │ jenkins │ v1.37.0 │ 24 Nov 25 13:13 UTC │                     │
	│ addons  │ disable dashboard -p addons-715644                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-715644          │ jenkins │ v1.37.0 │ 24 Nov 25 13:13 UTC │                     │
	│ start   │ -p addons-715644 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-715644          │ jenkins │ v1.37.0 │ 24 Nov 25 13:13 UTC │ 24 Nov 25 13:15 UTC │
	│ addons  │ addons-715644 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-715644          │ jenkins │ v1.37.0 │ 24 Nov 25 13:15 UTC │                     │
	│ addons  │ addons-715644 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-715644          │ jenkins │ v1.37.0 │ 24 Nov 25 13:16 UTC │                     │
	│ addons  │ enable headlamp -p addons-715644 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-715644          │ jenkins │ v1.37.0 │ 24 Nov 25 13:16 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 13:13:43
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
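
The "Log line format" header above is the standard glog prefix used by every entry that follows. A minimal Go sketch for splitting such a line into its fields (the regexp and names are illustrative, not taken from minikube):

	package main

	import (
		"fmt"
		"regexp"
	)

	// glogLine matches the prefix documented above:
	//   [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	var glogLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+) ([^:]+):(\d+)\] (.*)$`)

	func main() {
		line := "I1124 13:13:43.853072  352949 out.go:360] Setting OutFile to fd 1 ..."
		m := glogLine.FindStringSubmatch(line)
		if m == nil {
			fmt.Println("not a glog line")
			return
		}
		fmt.Printf("severity=%s date=%s time=%s tid=%s file=%s:%s msg=%q\n",
			m[1], m[2], m[3], m[4], m[5], m[6], m[7])
	}
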
	I1124 13:13:43.853072  352949 out.go:360] Setting OutFile to fd 1 ...
	I1124 13:13:43.853306  352949 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:13:43.853316  352949 out.go:374] Setting ErrFile to fd 2...
	I1124 13:13:43.853322  352949 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:13:43.853842  352949 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-348000/.minikube/bin
	I1124 13:13:43.854698  352949 out.go:368] Setting JSON to false
	I1124 13:13:43.855597  352949 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6971,"bootTime":1763983053,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 13:13:43.855681  352949 start.go:143] virtualization: kvm guest
	I1124 13:13:43.857229  352949 out.go:179] * [addons-715644] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1124 13:13:43.858597  352949 out.go:179]   - MINIKUBE_LOCATION=21932
	I1124 13:13:43.858605  352949 notify.go:221] Checking for updates...
	I1124 13:13:43.861197  352949 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 13:13:43.862263  352949 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21932-348000/kubeconfig
	I1124 13:13:43.863302  352949 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21932-348000/.minikube
	I1124 13:13:43.864321  352949 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1124 13:13:43.865302  352949 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 13:13:43.866468  352949 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 13:13:43.887843  352949 docker.go:124] docker version: linux-29.0.3:Docker Engine - Community
	I1124 13:13:43.887993  352949 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 13:13:43.942603  352949 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:50 SystemTime:2025-11-24 13:13:43.933195472 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 13:13:43.942742  352949 docker.go:319] overlay module found
	I1124 13:13:43.944231  352949 out.go:179] * Using the docker driver based on user configuration
	I1124 13:13:43.945222  352949 start.go:309] selected driver: docker
	I1124 13:13:43.945233  352949 start.go:927] validating driver "docker" against <nil>
	I1124 13:13:43.945243  352949 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 13:13:43.945764  352949 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 13:13:44.003907  352949 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:50 SystemTime:2025-11-24 13:13:43.99409835 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 13:13:44.004143  352949 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1124 13:13:44.004421  352949 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 13:13:44.005883  352949 out.go:179] * Using Docker driver with root privileges
	I1124 13:13:44.006940  352949 cni.go:84] Creating CNI manager for ""
	I1124 13:13:44.007029  352949 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 13:13:44.007046  352949 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1124 13:13:44.007122  352949 start.go:353] cluster config:
	{Name:addons-715644 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-715644 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs:
AutoPauseInterval:1m0s}
	I1124 13:13:44.008281  352949 out.go:179] * Starting "addons-715644" primary control-plane node in "addons-715644" cluster
	I1124 13:13:44.009229  352949 cache.go:134] Beginning downloading kic base image for docker with crio
	I1124 13:13:44.010175  352949 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1124 13:13:44.011119  352949 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 13:13:44.011145  352949 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21932-348000/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1124 13:13:44.011151  352949 cache.go:65] Caching tarball of preloaded images
	I1124 13:13:44.011207  352949 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1124 13:13:44.011227  352949 preload.go:238] Found /home/jenkins/minikube-integration/21932-348000/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1124 13:13:44.011236  352949 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1124 13:13:44.011562  352949 profile.go:143] Saving config to /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/addons-715644/config.json ...
	I1124 13:13:44.011595  352949 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/addons-715644/config.json: {Name:mkb5f591b550421bc01d9518e6a72a508d786dc3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:13:44.026290  352949 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f to local cache
	I1124 13:13:44.026392  352949 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local cache directory
	I1124 13:13:44.026407  352949 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local cache directory, skipping pull
	I1124 13:13:44.026412  352949 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in cache, skipping pull
	I1124 13:13:44.026421  352949 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f as a tarball
	I1124 13:13:44.026425  352949 cache.go:172] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f from local cache
	I1124 13:13:56.067733  352949 cache.go:174] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f from cached tarball
	I1124 13:13:56.067772  352949 cache.go:240] Successfully downloaded all kic artifacts
	I1124 13:13:56.067841  352949 start.go:360] acquireMachinesLock for addons-715644: {Name:mk09735476b717614bfd96b379af3529b0f6a051 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 13:13:56.067960  352949 start.go:364] duration metric: took 97.197µs to acquireMachinesLock for "addons-715644"
	I1124 13:13:56.067990  352949 start.go:93] Provisioning new machine with config: &{Name:addons-715644 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-715644 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 13:13:56.068069  352949 start.go:125] createHost starting for "" (driver="docker")
	I1124 13:13:56.069691  352949 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1124 13:13:56.069956  352949 start.go:159] libmachine.API.Create for "addons-715644" (driver="docker")
	I1124 13:13:56.069991  352949 client.go:173] LocalClient.Create starting
	I1124 13:13:56.070091  352949 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21932-348000/.minikube/certs/ca.pem
	I1124 13:13:56.184274  352949 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21932-348000/.minikube/certs/cert.pem
	I1124 13:13:56.218806  352949 cli_runner.go:164] Run: docker network inspect addons-715644 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1124 13:13:56.234342  352949 cli_runner.go:211] docker network inspect addons-715644 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1124 13:13:56.234406  352949 network_create.go:284] running [docker network inspect addons-715644] to gather additional debugging logs...
	I1124 13:13:56.234431  352949 cli_runner.go:164] Run: docker network inspect addons-715644
	W1124 13:13:56.249128  352949 cli_runner.go:211] docker network inspect addons-715644 returned with exit code 1
	I1124 13:13:56.249153  352949 network_create.go:287] error running [docker network inspect addons-715644]: docker network inspect addons-715644: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-715644 not found
	I1124 13:13:56.249170  352949 network_create.go:289] output of [docker network inspect addons-715644]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-715644 not found
	
	** /stderr **
	I1124 13:13:56.249241  352949 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 13:13:56.265199  352949 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ba49d0}
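
The Gateway/ClientMin/ClientMax/Broadcast values in the line above follow directly from the chosen 192.168.49.0/24 network. A short Go sketch of that arithmetic using only the standard library (minikube's own helper may differ):

	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		// For 192.168.49.0/24: gateway .1, first client .2, last client .254, broadcast .255.
		_, ipnet, err := net.ParseCIDR("192.168.49.0/24")
		if err != nil {
			panic(err)
		}
		network := ipnet.IP.To4()
		mask := ipnet.Mask

		broadcast := make(net.IP, 4)
		for i := range network {
			broadcast[i] = network[i] | ^mask[i]
		}
		gateway := make(net.IP, 4)
		copy(gateway, network)
		gateway[3]++ // .1

		clientMin := make(net.IP, 4)
		copy(clientMin, network)
		clientMin[3] += 2 // .2

		clientMax := make(net.IP, 4)
		copy(clientMax, broadcast)
		clientMax[3]-- // .254

		fmt.Println("gateway:", gateway, "clientMin:", clientMin,
			"clientMax:", clientMax, "broadcast:", broadcast)
	}
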
	I1124 13:13:56.265252  352949 network_create.go:124] attempt to create docker network addons-715644 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1124 13:13:56.265303  352949 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-715644 addons-715644
	I1124 13:13:56.309737  352949 network_create.go:108] docker network addons-715644 192.168.49.0/24 created
	I1124 13:13:56.309766  352949 kic.go:121] calculated static IP "192.168.49.2" for the "addons-715644" container
	I1124 13:13:56.309831  352949 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1124 13:13:56.324350  352949 cli_runner.go:164] Run: docker volume create addons-715644 --label name.minikube.sigs.k8s.io=addons-715644 --label created_by.minikube.sigs.k8s.io=true
	I1124 13:13:56.340577  352949 oci.go:103] Successfully created a docker volume addons-715644
	I1124 13:13:56.340637  352949 cli_runner.go:164] Run: docker run --rm --name addons-715644-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-715644 --entrypoint /usr/bin/test -v addons-715644:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1124 13:14:02.404337  352949 cli_runner.go:217] Completed: docker run --rm --name addons-715644-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-715644 --entrypoint /usr/bin/test -v addons-715644:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib: (6.063644909s)
	I1124 13:14:02.404375  352949 oci.go:107] Successfully prepared a docker volume addons-715644
	I1124 13:14:02.404424  352949 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 13:14:02.404438  352949 kic.go:194] Starting extracting preloaded images to volume ...
	I1124 13:14:02.404499  352949 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21932-348000/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-715644:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
	I1124 13:14:06.599593  352949 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21932-348000/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-715644:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (4.195023164s)
	I1124 13:14:06.599628  352949 kic.go:203] duration metric: took 4.195185794s to extract preloaded images to volume ...
	W1124 13:14:06.599703  352949 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1124 13:14:06.599746  352949 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1124 13:14:06.599791  352949 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1124 13:14:06.655585  352949 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-715644 --name addons-715644 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-715644 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-715644 --network addons-715644 --ip 192.168.49.2 --volume addons-715644:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
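
The run command above publishes the container's SSH and API ports on ephemeral host ports (the `--publish=127.0.0.1::22` form); the host side is recovered later with the `docker container inspect` template that appears further down in this log. A small Go sketch of that lookup via os/exec, assuming the container name from this run:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same Go template as the "docker container inspect -f ..." calls in this log:
		// ask Docker which host port was bound to the container's 22/tcp.
		out, err := exec.Command("docker", "container", "inspect", "-f",
			`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
			"addons-715644").Output()
		if err != nil {
			panic(err)
		}
		fmt.Println("ssh host port:", strings.TrimSpace(string(out)))
	}
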
	I1124 13:14:06.924593  352949 cli_runner.go:164] Run: docker container inspect addons-715644 --format={{.State.Running}}
	I1124 13:14:06.943231  352949 cli_runner.go:164] Run: docker container inspect addons-715644 --format={{.State.Status}}
	I1124 13:14:06.959799  352949 cli_runner.go:164] Run: docker exec addons-715644 stat /var/lib/dpkg/alternatives/iptables
	I1124 13:14:07.000871  352949 oci.go:144] the created container "addons-715644" has a running status.
	I1124 13:14:07.000924  352949 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21932-348000/.minikube/machines/addons-715644/id_rsa...
	I1124 13:14:07.057848  352949 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21932-348000/.minikube/machines/addons-715644/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1124 13:14:07.085706  352949 cli_runner.go:164] Run: docker container inspect addons-715644 --format={{.State.Status}}
	I1124 13:14:07.103909  352949 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1124 13:14:07.103940  352949 kic_runner.go:114] Args: [docker exec --privileged addons-715644 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1124 13:14:07.153586  352949 cli_runner.go:164] Run: docker container inspect addons-715644 --format={{.State.Status}}
	I1124 13:14:07.173279  352949 machine.go:94] provisionDockerMachine start ...
	I1124 13:14:07.173416  352949 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-715644
	I1124 13:14:07.192844  352949 main.go:143] libmachine: Using SSH client type: native
	I1124 13:14:07.193239  352949 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33143 <nil> <nil>}
	I1124 13:14:07.193263  352949 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 13:14:07.194053  352949 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:38264->127.0.0.1:33143: read: connection reset by peer
	I1124 13:14:10.336212  352949 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-715644
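
The failed dial above ("connection reset by peer") followed roughly three seconds later by a successful result appears to be the provisioner retrying SSH until the container's sshd accepts connections. A rough sketch of that dial-with-retry pattern using golang.org/x/crypto/ssh; the key path and port come from this log, while the retry helper itself is illustrative rather than minikube's code:

	package main

	import (
		"fmt"
		"os"
		"time"

		"golang.org/x/crypto/ssh"
	)

	func dialWithRetry(addr string, cfg *ssh.ClientConfig, attempts int) (*ssh.Client, error) {
		var lastErr error
		for i := 0; i < attempts; i++ {
			c, err := ssh.Dial("tcp", addr, cfg)
			if err == nil {
				return c, nil
			}
			lastErr = err // e.g. "connection reset by peer" while sshd is still starting
			time.Sleep(time.Second)
		}
		return nil, lastErr
	}

	func main() {
		key, err := os.ReadFile("/home/jenkins/minikube-integration/21932-348000/.minikube/machines/addons-715644/id_rsa")
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			panic(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(),
			Timeout:         5 * time.Second,
		}
		client, err := dialWithRetry("127.0.0.1:33143", cfg, 10)
		if err != nil {
			panic(err)
		}
		defer client.Close()
		fmt.Println("connected")
	}
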
	
	I1124 13:14:10.336244  352949 ubuntu.go:182] provisioning hostname "addons-715644"
	I1124 13:14:10.336314  352949 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-715644
	I1124 13:14:10.353089  352949 main.go:143] libmachine: Using SSH client type: native
	I1124 13:14:10.353359  352949 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33143 <nil> <nil>}
	I1124 13:14:10.353373  352949 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-715644 && echo "addons-715644" | sudo tee /etc/hostname
	I1124 13:14:10.500427  352949 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-715644
	
	I1124 13:14:10.500501  352949 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-715644
	I1124 13:14:10.517607  352949 main.go:143] libmachine: Using SSH client type: native
	I1124 13:14:10.517819  352949 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33143 <nil> <nil>}
	I1124 13:14:10.517836  352949 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-715644' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-715644/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-715644' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 13:14:10.656744  352949 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 13:14:10.656777  352949 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21932-348000/.minikube CaCertPath:/home/jenkins/minikube-integration/21932-348000/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21932-348000/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21932-348000/.minikube}
	I1124 13:14:10.656809  352949 ubuntu.go:190] setting up certificates
	I1124 13:14:10.656831  352949 provision.go:84] configureAuth start
	I1124 13:14:10.656904  352949 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-715644
	I1124 13:14:10.673100  352949 provision.go:143] copyHostCerts
	I1124 13:14:10.673159  352949 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-348000/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21932-348000/.minikube/key.pem (1675 bytes)
	I1124 13:14:10.673267  352949 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-348000/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21932-348000/.minikube/ca.pem (1078 bytes)
	I1124 13:14:10.673326  352949 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-348000/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21932-348000/.minikube/cert.pem (1123 bytes)
	I1124 13:14:10.673372  352949 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21932-348000/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21932-348000/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21932-348000/.minikube/certs/ca-key.pem org=jenkins.addons-715644 san=[127.0.0.1 192.168.49.2 addons-715644 localhost minikube]
	I1124 13:14:10.709635  352949 provision.go:177] copyRemoteCerts
	I1124 13:14:10.709683  352949 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 13:14:10.709724  352949 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-715644
	I1124 13:14:10.725459  352949 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/addons-715644/id_rsa Username:docker}
	I1124 13:14:10.824258  352949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1124 13:14:10.842806  352949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1124 13:14:10.859233  352949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1124 13:14:10.875249  352949 provision.go:87] duration metric: took 218.402556ms to configureAuth
	I1124 13:14:10.875272  352949 ubuntu.go:206] setting minikube options for container-runtime
	I1124 13:14:10.875430  352949 config.go:182] Loaded profile config "addons-715644": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 13:14:10.875549  352949 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-715644
	I1124 13:14:10.892045  352949 main.go:143] libmachine: Using SSH client type: native
	I1124 13:14:10.892246  352949 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33143 <nil> <nil>}
	I1124 13:14:10.892261  352949 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1124 13:14:11.168289  352949 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1124 13:14:11.168315  352949 machine.go:97] duration metric: took 3.995006612s to provisionDockerMachine
	I1124 13:14:11.168329  352949 client.go:176] duration metric: took 15.098328866s to LocalClient.Create
	I1124 13:14:11.168352  352949 start.go:167] duration metric: took 15.098397897s to libmachine.API.Create "addons-715644"
	I1124 13:14:11.168361  352949 start.go:293] postStartSetup for "addons-715644" (driver="docker")
	I1124 13:14:11.168369  352949 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 13:14:11.168439  352949 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 13:14:11.168485  352949 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-715644
	I1124 13:14:11.184779  352949 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/addons-715644/id_rsa Username:docker}
	I1124 13:14:11.286101  352949 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 13:14:11.289416  352949 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 13:14:11.289449  352949 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 13:14:11.289460  352949 filesync.go:126] Scanning /home/jenkins/minikube-integration/21932-348000/.minikube/addons for local assets ...
	I1124 13:14:11.289509  352949 filesync.go:126] Scanning /home/jenkins/minikube-integration/21932-348000/.minikube/files for local assets ...
	I1124 13:14:11.289531  352949 start.go:296] duration metric: took 121.165506ms for postStartSetup
	I1124 13:14:11.289809  352949 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-715644
	I1124 13:14:11.306043  352949 profile.go:143] Saving config to /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/addons-715644/config.json ...
	I1124 13:14:11.306258  352949 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 13:14:11.306304  352949 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-715644
	I1124 13:14:11.322081  352949 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/addons-715644/id_rsa Username:docker}
	I1124 13:14:11.418314  352949 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 13:14:11.422551  352949 start.go:128] duration metric: took 15.354467204s to createHost
	I1124 13:14:11.422572  352949 start.go:83] releasing machines lock for "addons-715644", held for 15.354597577s
	I1124 13:14:11.422620  352949 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-715644
	I1124 13:14:11.438530  352949 ssh_runner.go:195] Run: cat /version.json
	I1124 13:14:11.438575  352949 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-715644
	I1124 13:14:11.438650  352949 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 13:14:11.438734  352949 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-715644
	I1124 13:14:11.455653  352949 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/addons-715644/id_rsa Username:docker}
	I1124 13:14:11.456204  352949 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/addons-715644/id_rsa Username:docker}
	I1124 13:14:11.603820  352949 ssh_runner.go:195] Run: systemctl --version
	I1124 13:14:11.609866  352949 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1124 13:14:11.642004  352949 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 13:14:11.646231  352949 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 13:14:11.646283  352949 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 13:14:11.670256  352949 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1124 13:14:11.670279  352949 start.go:496] detecting cgroup driver to use...
	I1124 13:14:11.670305  352949 detect.go:190] detected "systemd" cgroup driver on host os
	I1124 13:14:11.670341  352949 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1124 13:14:11.684860  352949 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1124 13:14:11.695829  352949 docker.go:218] disabling cri-docker service (if available) ...
	I1124 13:14:11.695876  352949 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 13:14:11.710493  352949 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 13:14:11.727039  352949 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 13:14:11.805432  352949 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 13:14:11.889185  352949 docker.go:234] disabling docker service ...
	I1124 13:14:11.889238  352949 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 13:14:11.906513  352949 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 13:14:11.917760  352949 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 13:14:11.997162  352949 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 13:14:12.073235  352949 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 13:14:12.084183  352949 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 13:14:12.096903  352949 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1124 13:14:12.096975  352949 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 13:14:12.106365  352949 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1124 13:14:12.106417  352949 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 13:14:12.114230  352949 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 13:14:12.122280  352949 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 13:14:12.130141  352949 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 13:14:12.137320  352949 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 13:14:12.145035  352949 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 13:14:12.157014  352949 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 13:14:12.164775  352949 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 13:14:12.171484  352949 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 13:14:12.178421  352949 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 13:14:12.251114  352949 ssh_runner.go:195] Run: sudo systemctl restart crio
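
Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with roughly the following settings before crio is restarted. This is a reconstruction from the commands in this log, not the literal file on the node; section placement follows CRI-O's documented schema:

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"

	[crio.runtime]
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
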
	I1124 13:14:12.375877  352949 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1124 13:14:12.375973  352949 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1124 13:14:12.379708  352949 start.go:564] Will wait 60s for crictl version
	I1124 13:14:12.379772  352949 ssh_runner.go:195] Run: which crictl
	I1124 13:14:12.383255  352949 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 13:14:12.406179  352949 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1124 13:14:12.406268  352949 ssh_runner.go:195] Run: crio --version
	I1124 13:14:12.432345  352949 ssh_runner.go:195] Run: crio --version
	I1124 13:14:12.460361  352949 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1124 13:14:12.461448  352949 cli_runner.go:164] Run: docker network inspect addons-715644 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 13:14:12.477332  352949 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1124 13:14:12.481115  352949 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 13:14:12.490727  352949 kubeadm.go:884] updating cluster {Name:addons-715644 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-715644 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketV
MnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 13:14:12.490858  352949 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 13:14:12.490940  352949 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 13:14:12.522061  352949 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 13:14:12.522080  352949 crio.go:433] Images already preloaded, skipping extraction
	I1124 13:14:12.522116  352949 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 13:14:12.547861  352949 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 13:14:12.547879  352949 cache_images.go:86] Images are preloaded, skipping loading
	I1124 13:14:12.547900  352949 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1124 13:14:12.548021  352949 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-715644 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-715644 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1124 13:14:12.548089  352949 ssh_runner.go:195] Run: crio config
	I1124 13:14:12.591119  352949 cni.go:84] Creating CNI manager for ""
	I1124 13:14:12.591139  352949 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 13:14:12.591157  352949 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 13:14:12.591179  352949 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-715644 NodeName:addons-715644 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernet
es/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 13:14:12.591309  352949 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-715644"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
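
The generated kubeadm config above is a multi-document YAML (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) that is copied to /var/tmp/minikube/kubeadm.yaml.new a few lines below. A small Go sketch that lists the document kinds with gopkg.in/yaml.v3; the path matches this log, everything else is illustrative:

	package main

	import (
		"fmt"
		"io"
		"os"

		"gopkg.in/yaml.v3"
	)

	func main() {
		f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
		if err != nil {
			panic(err)
		}
		defer f.Close()

		dec := yaml.NewDecoder(f)
		for {
			var doc struct {
				APIVersion string `yaml:"apiVersion"`
				Kind       string `yaml:"kind"`
			}
			if err := dec.Decode(&doc); err != nil {
				if err == io.EOF {
					break
				}
				panic(err)
			}
			fmt.Printf("%s (%s)\n", doc.Kind, doc.APIVersion)
		}
	}
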
	
	I1124 13:14:12.591366  352949 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1124 13:14:12.599058  352949 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 13:14:12.599116  352949 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 13:14:12.606317  352949 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1124 13:14:12.617777  352949 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 13:14:12.631701  352949 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
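The multi-document config shown above (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration) is what gets written to /var/tmp/minikube/kubeadm.yaml.new here. As a minimal sketch of how such a stream can be inspected before kubeadm consumes it (assuming gopkg.in/yaml.v3 is available; the path is the one from this log, and the helper is not part of minikube), each document can be decoded in turn and its kind reported:

    // Sketch only: list the kind/apiVersion of each document in the generated
    // kubeadm config stream. Assumes gopkg.in/yaml.v3; path taken from the log above.
    package main

    import (
        "errors"
        "fmt"
        "io"
        "os"

        "gopkg.in/yaml.v3"
    )

    func main() {
        f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
        if err != nil {
            panic(err)
        }
        defer f.Close()

        dec := yaml.NewDecoder(f)
        for {
            var doc struct {
                APIVersion string `yaml:"apiVersion"`
                Kind       string `yaml:"kind"`
            }
            if err := dec.Decode(&doc); err != nil {
                if errors.Is(err, io.EOF) {
                    break
                }
                panic(err)
            }
            fmt.Printf("%s (%s)\n", doc.Kind, doc.APIVersion)
        }
    }

For the config above, this would print the four kinds in order, which is a quick way to confirm the file was assembled as intended.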
	I1124 13:14:12.642972  352949 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1124 13:14:12.646219  352949 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
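The one-liner above strips any stale control-plane.minikube.internal entry from /etc/hosts and re-adds it pointing at 192.168.49.2. A simplified sketch of the same idea in Go follows; it only appends the entry when it is missing (unlike the log's command, it does not replace a stale one), and the helper is hypothetical, not minikube code:

    // Sketch only: make sure /etc/hosts resolves control-plane.minikube.internal
    // to the node IP seen in the log above. Simplified: appends only if missing.
    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        const entry = "192.168.49.2\tcontrol-plane.minikube.internal"
        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            panic(err)
        }
        if strings.Contains(string(data), "control-plane.minikube.internal") {
            fmt.Println("entry already present")
            return
        }
        f, err := os.OpenFile("/etc/hosts", os.O_APPEND|os.O_WRONLY, 0644)
        if err != nil {
            panic(err)
        }
        defer f.Close()
        if _, err := f.WriteString(entry + "\n"); err != nil {
            panic(err)
        }
    }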
	I1124 13:14:12.655094  352949 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 13:14:12.731814  352949 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 13:14:12.752757  352949 certs.go:69] Setting up /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/addons-715644 for IP: 192.168.49.2
	I1124 13:14:12.752776  352949 certs.go:195] generating shared ca certs ...
	I1124 13:14:12.752793  352949 certs.go:227] acquiring lock for ca certs: {Name:mk929c5478505d0d4647158f3ccc02830de7b582 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:14:12.752917  352949 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21932-348000/.minikube/ca.key
	I1124 13:14:12.840736  352949 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21932-348000/.minikube/ca.crt ...
	I1124 13:14:12.840761  352949 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-348000/.minikube/ca.crt: {Name:mkce1262ae281136b1dd62caba3163658cacaba9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:14:12.840918  352949 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21932-348000/.minikube/ca.key ...
	I1124 13:14:12.840930  352949 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-348000/.minikube/ca.key: {Name:mk393d7fd776167e6c04ca0ef96f76563f922aba Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:14:12.841003  352949 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21932-348000/.minikube/proxy-client-ca.key
	I1124 13:14:12.907222  352949 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21932-348000/.minikube/proxy-client-ca.crt ...
	I1124 13:14:12.907244  352949 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-348000/.minikube/proxy-client-ca.crt: {Name:mkd68573e62099628351083591bcfc80d3c6f763 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:14:12.907366  352949 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21932-348000/.minikube/proxy-client-ca.key ...
	I1124 13:14:12.907376  352949 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-348000/.minikube/proxy-client-ca.key: {Name:mk59b10792858e63a60454559819b9d0f6fa8b4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:14:12.907436  352949 certs.go:257] generating profile certs ...
	I1124 13:14:12.907491  352949 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/addons-715644/client.key
	I1124 13:14:12.907504  352949 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/addons-715644/client.crt with IP's: []
	I1124 13:14:13.031714  352949 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/addons-715644/client.crt ...
	I1124 13:14:13.031732  352949 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/addons-715644/client.crt: {Name:mke3aa47a8cb6947e96555de743329f99a5d82b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:14:13.031851  352949 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/addons-715644/client.key ...
	I1124 13:14:13.031861  352949 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/addons-715644/client.key: {Name:mk4f4b2ac4523e3659e5b2daaf0afaa4bb4ea022 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:14:13.031952  352949 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/addons-715644/apiserver.key.ec50c8b1
	I1124 13:14:13.031976  352949 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/addons-715644/apiserver.crt.ec50c8b1 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1124 13:14:13.100156  352949 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/addons-715644/apiserver.crt.ec50c8b1 ...
	I1124 13:14:13.100173  352949 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/addons-715644/apiserver.crt.ec50c8b1: {Name:mke28e301648f310171720622b136d1bceea46ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:14:13.100269  352949 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/addons-715644/apiserver.key.ec50c8b1 ...
	I1124 13:14:13.100280  352949 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/addons-715644/apiserver.key.ec50c8b1: {Name:mk3fd3ee9011a1c205f4f9f94cbbf968defc546b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:14:13.100339  352949 certs.go:382] copying /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/addons-715644/apiserver.crt.ec50c8b1 -> /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/addons-715644/apiserver.crt
	I1124 13:14:13.100406  352949 certs.go:386] copying /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/addons-715644/apiserver.key.ec50c8b1 -> /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/addons-715644/apiserver.key
	I1124 13:14:13.100452  352949 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/addons-715644/proxy-client.key
	I1124 13:14:13.100467  352949 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/addons-715644/proxy-client.crt with IP's: []
	I1124 13:14:13.378126  352949 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/addons-715644/proxy-client.crt ...
	I1124 13:14:13.378152  352949 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/addons-715644/proxy-client.crt: {Name:mkd4ff4f81b50eccf5d3bea5af6baa43a518412b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:14:13.378291  352949 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/addons-715644/proxy-client.key ...
	I1124 13:14:13.378304  352949 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/addons-715644/proxy-client.key: {Name:mkd9fc84243a34c10c818d9d1ec38eff074241d7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:14:13.378467  352949 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-348000/.minikube/certs/ca-key.pem (1675 bytes)
	I1124 13:14:13.378504  352949 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-348000/.minikube/certs/ca.pem (1078 bytes)
	I1124 13:14:13.378529  352949 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-348000/.minikube/certs/cert.pem (1123 bytes)
	I1124 13:14:13.378554  352949 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-348000/.minikube/certs/key.pem (1675 bytes)
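The certs.go steps above generate a self-signed minikubeCA and proxyClientCA, then derive the per-profile client, apiserver and aggregator certificates from them. As an illustrative sketch (not minikube's actual certs.go, just the standard-library pattern it builds on, with key size, lifetime and file names assumed), a self-signed CA of this shape can be produced with crypto/x509:

    // Sketch only: a minimal self-signed CA in the spirit of the minikubeCA
    // written above. Illustrative; key size, lifetime and file names are assumptions.
    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().AddDate(10, 0, 0),
            KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
            BasicConstraintsValid: true,
            IsCA:                  true,
        }
        // Self-signed: the template is its own parent.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        certPEM := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
        keyPEM := pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
        if err := os.WriteFile("ca.crt", certPEM, 0o644); err != nil {
            panic(err)
        }
        if err := os.WriteFile("ca.key", keyPEM, 0o600); err != nil {
            panic(err)
        }
    }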
	I1124 13:14:13.379176  352949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 13:14:13.396704  352949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 13:14:13.412924  352949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 13:14:13.429328  352949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1124 13:14:13.445181  352949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/addons-715644/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1124 13:14:13.461199  352949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/addons-715644/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1124 13:14:13.477680  352949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/addons-715644/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 13:14:13.495549  352949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/addons-715644/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1124 13:14:13.512412  352949 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 13:14:13.530370  352949 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 13:14:13.541613  352949 ssh_runner.go:195] Run: openssl version
	I1124 13:14:13.547188  352949 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 13:14:13.556985  352949 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 13:14:13.560456  352949 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 13:14 /usr/share/ca-certificates/minikubeCA.pem
	I1124 13:14:13.560504  352949 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 13:14:13.593379  352949 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 13:14:13.601003  352949 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 13:14:13.604173  352949 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1124 13:14:13.604226  352949 kubeadm.go:401] StartCluster: {Name:addons-715644 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-715644 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 13:14:13.604311  352949 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 13:14:13.604357  352949 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 13:14:13.629087  352949 cri.go:89] found id: ""
	I1124 13:14:13.629146  352949 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 13:14:13.636291  352949 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1124 13:14:13.643383  352949 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1124 13:14:13.643440  352949 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 13:14:13.650505  352949 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1124 13:14:13.650529  352949 kubeadm.go:158] found existing configuration files:
	
	I1124 13:14:13.650559  352949 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1124 13:14:13.657445  352949 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1124 13:14:13.657486  352949 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1124 13:14:13.664263  352949 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1124 13:14:13.671039  352949 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1124 13:14:13.671077  352949 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 13:14:13.677668  352949 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1124 13:14:13.684768  352949 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1124 13:14:13.684802  352949 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 13:14:13.691347  352949 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1124 13:14:13.698071  352949 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1124 13:14:13.698107  352949 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1124 13:14:13.704799  352949 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1124 13:14:13.739672  352949 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1124 13:14:13.739726  352949 kubeadm.go:319] [preflight] Running pre-flight checks
	I1124 13:14:13.758545  352949 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1124 13:14:13.758625  352949 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1124 13:14:13.758689  352949 kubeadm.go:319] OS: Linux
	I1124 13:14:13.758762  352949 kubeadm.go:319] CGROUPS_CPU: enabled
	I1124 13:14:13.758827  352949 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1124 13:14:13.758918  352949 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1124 13:14:13.758990  352949 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1124 13:14:13.759062  352949 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1124 13:14:13.759134  352949 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1124 13:14:13.759201  352949 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1124 13:14:13.759281  352949 kubeadm.go:319] CGROUPS_IO: enabled
	I1124 13:14:13.811301  352949 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1124 13:14:13.811447  352949 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1124 13:14:13.811598  352949 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1124 13:14:13.817956  352949 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1124 13:14:13.820250  352949 out.go:252]   - Generating certificates and keys ...
	I1124 13:14:13.820337  352949 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1124 13:14:13.820419  352949 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1124 13:14:13.979821  352949 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1124 13:14:14.103757  352949 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1124 13:14:14.268116  352949 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1124 13:14:14.693704  352949 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1124 13:14:15.218681  352949 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1124 13:14:15.218830  352949 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-715644 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1124 13:14:15.306494  352949 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1124 13:14:15.306610  352949 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-715644 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1124 13:14:15.519049  352949 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1124 13:14:16.050571  352949 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1124 13:14:16.508260  352949 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1124 13:14:16.508713  352949 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1124 13:14:17.130476  352949 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1124 13:14:17.300980  352949 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1124 13:14:17.531181  352949 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1124 13:14:18.094485  352949 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1124 13:14:18.137664  352949 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1124 13:14:18.138146  352949 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1124 13:14:18.141613  352949 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1124 13:14:18.143338  352949 out.go:252]   - Booting up control plane ...
	I1124 13:14:18.143422  352949 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1124 13:14:18.143732  352949 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1124 13:14:18.144495  352949 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1124 13:14:18.157250  352949 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1124 13:14:18.157407  352949 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1124 13:14:18.163305  352949 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1124 13:14:18.163523  352949 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1124 13:14:18.163575  352949 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1124 13:14:18.259686  352949 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1124 13:14:18.259843  352949 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1124 13:14:19.760348  352949 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.500781462s
	I1124 13:14:19.763186  352949 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1124 13:14:19.763304  352949 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1124 13:14:19.763424  352949 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1124 13:14:19.763546  352949 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1124 13:14:21.299378  352949 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.533577331s
	I1124 13:14:21.618017  352949 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.854821937s
	I1124 13:14:23.264809  352949 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.501617195s
	I1124 13:14:23.275838  352949 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1124 13:14:23.283407  352949 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1124 13:14:23.290793  352949 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1124 13:14:23.291102  352949 kubeadm.go:319] [mark-control-plane] Marking the node addons-715644 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1124 13:14:23.297555  352949 kubeadm.go:319] [bootstrap-token] Using token: za5myl.fnbisrs7rdfrxqnj
	I1124 13:14:23.299509  352949 out.go:252]   - Configuring RBAC rules ...
	I1124 13:14:23.299671  352949 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1124 13:14:23.301657  352949 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1124 13:14:23.306403  352949 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1124 13:14:23.308580  352949 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1124 13:14:23.310682  352949 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1124 13:14:23.313044  352949 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1124 13:14:23.671684  352949 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1124 13:14:24.083215  352949 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1124 13:14:24.670783  352949 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1124 13:14:24.671685  352949 kubeadm.go:319] 
	I1124 13:14:24.671793  352949 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1124 13:14:24.671811  352949 kubeadm.go:319] 
	I1124 13:14:24.671977  352949 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1124 13:14:24.671988  352949 kubeadm.go:319] 
	I1124 13:14:24.672022  352949 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1124 13:14:24.672126  352949 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1124 13:14:24.672217  352949 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1124 13:14:24.672235  352949 kubeadm.go:319] 
	I1124 13:14:24.672320  352949 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1124 13:14:24.672329  352949 kubeadm.go:319] 
	I1124 13:14:24.672370  352949 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1124 13:14:24.672388  352949 kubeadm.go:319] 
	I1124 13:14:24.672475  352949 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1124 13:14:24.672577  352949 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1124 13:14:24.672683  352949 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1124 13:14:24.672692  352949 kubeadm.go:319] 
	I1124 13:14:24.672798  352949 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1124 13:14:24.672926  352949 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1124 13:14:24.672944  352949 kubeadm.go:319] 
	I1124 13:14:24.673072  352949 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token za5myl.fnbisrs7rdfrxqnj \
	I1124 13:14:24.673230  352949 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:8508f5e374ce1614712f271f50423a392652f73206d8a868cc7aac45c80e4a0c \
	I1124 13:14:24.673266  352949 kubeadm.go:319] 	--control-plane 
	I1124 13:14:24.673277  352949 kubeadm.go:319] 
	I1124 13:14:24.673379  352949 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1124 13:14:24.673388  352949 kubeadm.go:319] 
	I1124 13:14:24.673487  352949 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token za5myl.fnbisrs7rdfrxqnj \
	I1124 13:14:24.673619  352949 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:8508f5e374ce1614712f271f50423a392652f73206d8a868cc7aac45c80e4a0c 
	I1124 13:14:24.675412  352949 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1124 13:14:24.675554  352949 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
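The join commands printed by kubeadm above carry a --discovery-token-ca-cert-hash, which is the SHA-256 of the DER-encoded Subject Public Key Info of the cluster CA certificate. A small sketch for recomputing that value from the ca.crt used in this run (path taken from the log; the stand-alone helper itself is hypothetical):

    // Sketch only: recompute kubeadm's discovery-token-ca-cert-hash
    // (sha256 over the CA certificate's Subject Public Key Info).
    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    func main() {
        pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
        if err != nil {
            panic(err)
        }
        block, _ := pem.Decode(pemBytes)
        if block == nil {
            panic("no PEM data in ca.crt")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            panic(err)
        }
        // RawSubjectPublicKeyInfo is the DER-encoded SPKI that kubeadm hashes.
        fmt.Printf("sha256:%x\n", sha256.Sum256(cert.RawSubjectPublicKeyInfo))
    }

If the output matches the hash in the join command above, the CA on disk is the one the cluster was initialized with.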
	I1124 13:14:24.675580  352949 cni.go:84] Creating CNI manager for ""
	I1124 13:14:24.675587  352949 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 13:14:24.676941  352949 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1124 13:14:24.677984  352949 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1124 13:14:24.682435  352949 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1124 13:14:24.682451  352949 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1124 13:14:24.695115  352949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1124 13:14:24.878275  352949 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1124 13:14:24.878369  352949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:14:24.878383  352949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-715644 minikube.k8s.io/updated_at=2025_11_24T13_14_24_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=b5d1c9f4e75f4e638a533695fd62619949cefcab minikube.k8s.io/name=addons-715644 minikube.k8s.io/primary=true
	I1124 13:14:24.958992  352949 ops.go:34] apiserver oom_adj: -16
	I1124 13:14:24.959132  352949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:14:25.459220  352949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:14:25.959464  352949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:14:26.459273  352949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:14:26.959370  352949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:14:27.459432  352949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:14:27.959234  352949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:14:28.459212  352949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:14:28.960002  352949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:14:29.460125  352949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:14:29.959518  352949 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:14:30.021632  352949 kubeadm.go:1114] duration metric: took 5.143325124s to wait for elevateKubeSystemPrivileges
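The repeated "kubectl get sa default" runs above are minikube waiting for the default ServiceAccount to exist before it grants kube-system privileges; in this run the wait took about 5.1s. A hypothetical stand-alone version of that wait loop, shelling out to kubectl with the kubeconfig path from this run, might look like this:

    // Sketch only: poll until the "default" ServiceAccount exists, mirroring the
    // repeated "kubectl get sa default" calls in the log. The timeout is an assumption.
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            cmd := exec.Command("kubectl", "--kubeconfig", "/var/lib/minikube/kubeconfig",
                "get", "sa", "default")
            if err := cmd.Run(); err == nil {
                fmt.Println("default service account is ready")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for the default service account")
    }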
	I1124 13:14:30.021680  352949 kubeadm.go:403] duration metric: took 16.417449219s to StartCluster
	I1124 13:14:30.021723  352949 settings.go:142] acquiring lock: {Name:mk72c17792ecaf5f4aecae499df19a0043a48eea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:14:30.021843  352949 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21932-348000/kubeconfig
	I1124 13:14:30.022474  352949 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-348000/kubeconfig: {Name:mk6bbc2300c711b206dd5e2ef6fd04da250c6338 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:14:30.023364  352949 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1124 13:14:30.023390  352949 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 13:14:30.023470  352949 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1124 13:14:30.023625  352949 config.go:182] Loaded profile config "addons-715644": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 13:14:30.023639  352949 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-715644"
	I1124 13:14:30.023645  352949 addons.go:70] Setting yakd=true in profile "addons-715644"
	I1124 13:14:30.023666  352949 addons.go:239] Setting addon yakd=true in "addons-715644"
	I1124 13:14:30.023675  352949 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-715644"
	I1124 13:14:30.023689  352949 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-715644"
	I1124 13:14:30.023690  352949 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-715644"
	I1124 13:14:30.023707  352949 host.go:66] Checking if "addons-715644" exists ...
	I1124 13:14:30.023715  352949 host.go:66] Checking if "addons-715644" exists ...
	I1124 13:14:30.023720  352949 host.go:66] Checking if "addons-715644" exists ...
	I1124 13:14:30.023752  352949 addons.go:70] Setting ingress=true in profile "addons-715644"
	I1124 13:14:30.023781  352949 addons.go:239] Setting addon ingress=true in "addons-715644"
	I1124 13:14:30.023814  352949 host.go:66] Checking if "addons-715644" exists ...
	I1124 13:14:30.023844  352949 addons.go:70] Setting storage-provisioner=true in profile "addons-715644"
	I1124 13:14:30.023869  352949 addons.go:239] Setting addon storage-provisioner=true in "addons-715644"
	I1124 13:14:30.023914  352949 host.go:66] Checking if "addons-715644" exists ...
	I1124 13:14:30.024035  352949 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-715644"
	I1124 13:14:30.024087  352949 addons.go:70] Setting volumesnapshots=true in profile "addons-715644"
	I1124 13:14:30.024124  352949 addons.go:239] Setting addon volumesnapshots=true in "addons-715644"
	I1124 13:14:30.024133  352949 addons.go:70] Setting registry=true in profile "addons-715644"
	I1124 13:14:30.024159  352949 addons.go:239] Setting addon registry=true in "addons-715644"
	I1124 13:14:30.024175  352949 host.go:66] Checking if "addons-715644" exists ...
	I1124 13:14:30.024177  352949 addons.go:70] Setting default-storageclass=true in profile "addons-715644"
	I1124 13:14:30.024234  352949 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-715644"
	I1124 13:14:30.024303  352949 cli_runner.go:164] Run: docker container inspect addons-715644 --format={{.State.Status}}
	I1124 13:14:30.024315  352949 addons.go:70] Setting gcp-auth=true in profile "addons-715644"
	I1124 13:14:30.024370  352949 mustload.go:66] Loading cluster: addons-715644
	I1124 13:14:30.024406  352949 cli_runner.go:164] Run: docker container inspect addons-715644 --format={{.State.Status}}
	I1124 13:14:30.024530  352949 config.go:182] Loaded profile config "addons-715644": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 13:14:30.024603  352949 cli_runner.go:164] Run: docker container inspect addons-715644 --format={{.State.Status}}
	I1124 13:14:30.024748  352949 cli_runner.go:164] Run: docker container inspect addons-715644 --format={{.State.Status}}
	I1124 13:14:30.024802  352949 cli_runner.go:164] Run: docker container inspect addons-715644 --format={{.State.Status}}
	I1124 13:14:30.024880  352949 host.go:66] Checking if "addons-715644" exists ...
	I1124 13:14:30.024126  352949 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-715644"
	I1124 13:14:30.025341  352949 host.go:66] Checking if "addons-715644" exists ...
	I1124 13:14:30.026481  352949 out.go:179] * Verifying Kubernetes components...
	I1124 13:14:30.026579  352949 cli_runner.go:164] Run: docker container inspect addons-715644 --format={{.State.Status}}
	I1124 13:14:30.024303  352949 cli_runner.go:164] Run: docker container inspect addons-715644 --format={{.State.Status}}
	I1124 13:14:30.024028  352949 addons.go:70] Setting volcano=true in profile "addons-715644"
	I1124 13:14:30.027198  352949 addons.go:239] Setting addon volcano=true in "addons-715644"
	I1124 13:14:30.027249  352949 host.go:66] Checking if "addons-715644" exists ...
	I1124 13:14:30.024958  352949 cli_runner.go:164] Run: docker container inspect addons-715644 --format={{.State.Status}}
	I1124 13:14:30.024060  352949 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-715644"
	I1124 13:14:30.027558  352949 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-715644"
	I1124 13:14:30.028045  352949 cli_runner.go:164] Run: docker container inspect addons-715644 --format={{.State.Status}}
	I1124 13:14:30.028317  352949 cli_runner.go:164] Run: docker container inspect addons-715644 --format={{.State.Status}}
	I1124 13:14:30.029557  352949 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 13:14:30.024072  352949 addons.go:70] Setting cloud-spanner=true in profile "addons-715644"
	I1124 13:14:30.031595  352949 addons.go:239] Setting addon cloud-spanner=true in "addons-715644"
	I1124 13:14:30.031641  352949 host.go:66] Checking if "addons-715644" exists ...
	I1124 13:14:30.032192  352949 cli_runner.go:164] Run: docker container inspect addons-715644 --format={{.State.Status}}
	I1124 13:14:30.025054  352949 addons.go:70] Setting inspektor-gadget=true in profile "addons-715644"
	I1124 13:14:30.033206  352949 addons.go:239] Setting addon inspektor-gadget=true in "addons-715644"
	I1124 13:14:30.033233  352949 host.go:66] Checking if "addons-715644" exists ...
	I1124 13:14:30.033707  352949 cli_runner.go:164] Run: docker container inspect addons-715644 --format={{.State.Status}}
	I1124 13:14:30.025111  352949 addons.go:70] Setting registry-creds=true in profile "addons-715644"
	I1124 13:14:30.033942  352949 addons.go:239] Setting addon registry-creds=true in "addons-715644"
	I1124 13:14:30.033984  352949 host.go:66] Checking if "addons-715644" exists ...
	I1124 13:14:30.034440  352949 cli_runner.go:164] Run: docker container inspect addons-715644 --format={{.State.Status}}
	I1124 13:14:30.025125  352949 addons.go:70] Setting metrics-server=true in profile "addons-715644"
	I1124 13:14:30.037043  352949 addons.go:239] Setting addon metrics-server=true in "addons-715644"
	I1124 13:14:30.037089  352949 host.go:66] Checking if "addons-715644" exists ...
	I1124 13:14:30.025148  352949 addons.go:70] Setting ingress-dns=true in profile "addons-715644"
	I1124 13:14:30.037518  352949 addons.go:239] Setting addon ingress-dns=true in "addons-715644"
	I1124 13:14:30.037559  352949 cli_runner.go:164] Run: docker container inspect addons-715644 --format={{.State.Status}}
	I1124 13:14:30.025192  352949 cli_runner.go:164] Run: docker container inspect addons-715644 --format={{.State.Status}}
	I1124 13:14:30.037830  352949 host.go:66] Checking if "addons-715644" exists ...
	I1124 13:14:30.039688  352949 cli_runner.go:164] Run: docker container inspect addons-715644 --format={{.State.Status}}
	I1124 13:14:30.040389  352949 cli_runner.go:164] Run: docker container inspect addons-715644 --format={{.State.Status}}
	I1124 13:14:30.067623  352949 host.go:66] Checking if "addons-715644" exists ...
	I1124 13:14:30.078785  352949 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1124 13:14:30.080040  352949 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1124 13:14:30.080104  352949 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1124 13:14:30.080550  352949 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-715644
	I1124 13:14:30.082597  352949 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1124 13:14:30.085253  352949 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1124 13:14:30.085308  352949 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1124 13:14:30.085971  352949 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-715644
	I1124 13:14:30.086149  352949 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1124 13:14:30.087156  352949 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1124 13:14:30.087177  352949 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1124 13:14:30.087222  352949 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-715644
	I1124 13:14:30.091527  352949 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1124 13:14:30.091881  352949 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1124 13:14:30.092547  352949 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1124 13:14:30.092949  352949 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1124 13:14:30.093004  352949 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-715644
	I1124 13:14:30.093766  352949 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1124 13:14:30.093780  352949 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1124 13:14:30.093831  352949 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-715644
	I1124 13:14:30.094031  352949 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1124 13:14:30.096357  352949 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1124 13:14:30.098556  352949 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1124 13:14:30.098705  352949 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.14.0
	I1124 13:14:30.099707  352949 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1124 13:14:30.100711  352949 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1124 13:14:30.101769  352949 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1124 13:14:30.101884  352949 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1124 13:14:30.102938  352949 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1124 13:14:30.103141  352949 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1124 13:14:30.103155  352949 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1124 13:14:30.103210  352949 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-715644
	I1124 13:14:30.106050  352949 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.45
	I1124 13:14:30.106095  352949 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 13:14:30.106373  352949 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1124 13:14:30.107526  352949 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1124 13:14:30.107593  352949 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-715644"
	I1124 13:14:30.113397  352949 host.go:66] Checking if "addons-715644" exists ...
	I1124 13:14:30.112998  352949 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 13:14:30.114422  352949 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 13:14:30.114477  352949 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-715644
	I1124 13:14:30.113045  352949 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1124 13:14:30.114719  352949 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1124 13:14:30.114762  352949 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-715644
	I1124 13:14:30.115798  352949 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1124 13:14:30.115835  352949 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1124 13:14:30.115867  352949 cli_runner.go:164] Run: docker container inspect addons-715644 --format={{.State.Status}}
	I1124 13:14:30.116008  352949 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-715644
	I1124 13:14:30.116819  352949 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1124 13:14:30.117937  352949 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1124 13:14:30.118070  352949 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1124 13:14:30.118084  352949 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1124 13:14:30.118148  352949 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-715644
	I1124 13:14:30.118929  352949 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1124 13:14:30.118944  352949 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1124 13:14:30.118984  352949 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-715644
	W1124 13:14:30.127090  352949 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1124 13:14:30.151569  352949 addons.go:239] Setting addon default-storageclass=true in "addons-715644"
	I1124 13:14:30.151653  352949 host.go:66] Checking if "addons-715644" exists ...
	I1124 13:14:30.151870  352949 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1124 13:14:30.151902  352949 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.46.0
	I1124 13:14:30.152533  352949 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/addons-715644/id_rsa Username:docker}
	I1124 13:14:30.154032  352949 cli_runner.go:164] Run: docker container inspect addons-715644 --format={{.State.Status}}
	I1124 13:14:30.155362  352949 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/addons-715644/id_rsa Username:docker}
	I1124 13:14:30.156778  352949 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1124 13:14:30.156797  352949 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1124 13:14:30.156847  352949 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-715644
	I1124 13:14:30.157294  352949 out.go:179]   - Using image docker.io/registry:3.0.0
	I1124 13:14:30.158648  352949 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1124 13:14:30.158710  352949 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1124 13:14:30.159392  352949 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-715644
	I1124 13:14:30.165469  352949 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/addons-715644/id_rsa Username:docker}
	I1124 13:14:30.169558  352949 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/addons-715644/id_rsa Username:docker}
	I1124 13:14:30.169652  352949 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/addons-715644/id_rsa Username:docker}
	I1124 13:14:30.175749  352949 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/addons-715644/id_rsa Username:docker}
	I1124 13:14:30.176158  352949 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/addons-715644/id_rsa Username:docker}
	I1124 13:14:30.181050  352949 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/addons-715644/id_rsa Username:docker}
	I1124 13:14:30.201739  352949 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/addons-715644/id_rsa Username:docker}
	I1124 13:14:30.202987  352949 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/addons-715644/id_rsa Username:docker}
	I1124 13:14:30.203302  352949 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1124 13:14:30.204507  352949 out.go:179]   - Using image docker.io/busybox:stable
	I1124 13:14:30.208796  352949 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1124 13:14:30.208816  352949 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1124 13:14:30.208880  352949 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-715644
	I1124 13:14:30.209837  352949 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/addons-715644/id_rsa Username:docker}
	I1124 13:14:30.211980  352949 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/addons-715644/id_rsa Username:docker}
	W1124 13:14:30.214809  352949 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1124 13:14:30.214842  352949 retry.go:31] will retry after 363.076161ms: ssh: handshake failed: EOF
	I1124 13:14:30.215877  352949 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 13:14:30.216031  352949 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 13:14:30.216202  352949 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-715644
	I1124 13:14:30.217107  352949 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/addons-715644/id_rsa Username:docker}
	I1124 13:14:30.220743  352949 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1124 13:14:30.239873  352949 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/addons-715644/id_rsa Username:docker}
	I1124 13:14:30.253780  352949 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/addons-715644/id_rsa Username:docker}
	W1124 13:14:30.254883  352949 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1124 13:14:30.254957  352949 retry.go:31] will retry after 215.076815ms: ssh: handshake failed: EOF
	I1124 13:14:30.270352  352949 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 13:14:30.328341  352949 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1124 13:14:30.334848  352949 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1124 13:14:30.367436  352949 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1124 13:14:30.378381  352949 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1124 13:14:30.378405  352949 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1124 13:14:30.381479  352949 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1124 13:14:30.381546  352949 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1124 13:14:30.381634  352949 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1124 13:14:30.384512  352949 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1124 13:14:30.390266  352949 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1124 13:14:30.390331  352949 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1124 13:14:30.391642  352949 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1124 13:14:30.392143  352949 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1124 13:14:30.395534  352949 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1124 13:14:30.395547  352949 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1124 13:14:30.404643  352949 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1124 13:14:30.414806  352949 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 13:14:30.417018  352949 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1124 13:14:30.417036  352949 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1124 13:14:30.435872  352949 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1124 13:14:30.435907  352949 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1124 13:14:30.438437  352949 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1124 13:14:30.438493  352949 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1124 13:14:30.447863  352949 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1124 13:14:30.447881  352949 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1124 13:14:30.464406  352949 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1124 13:14:30.464475  352949 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1124 13:14:30.502244  352949 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1124 13:14:30.502291  352949 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1124 13:14:30.511373  352949 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1124 13:14:30.511400  352949 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1124 13:14:30.514197  352949 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1124 13:14:30.514215  352949 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1124 13:14:30.518993  352949 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1124 13:14:30.519052  352949 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1124 13:14:30.556351  352949 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1124 13:14:30.559526  352949 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1124 13:14:30.559554  352949 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1124 13:14:30.561247  352949 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1124 13:14:30.576813  352949 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1124 13:14:30.576836  352949 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1124 13:14:30.584062  352949 node_ready.go:35] waiting up to 6m0s for node "addons-715644" to be "Ready" ...
	I1124 13:14:30.584312  352949 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1124 13:14:30.596177  352949 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1124 13:14:30.596203  352949 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1124 13:14:30.642379  352949 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1124 13:14:30.646646  352949 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1124 13:14:30.646674  352949 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1124 13:14:30.700202  352949 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 13:14:30.703390  352949 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1124 13:14:30.703411  352949 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1124 13:14:30.757934  352949 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1124 13:14:30.757964  352949 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1124 13:14:30.790334  352949 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1124 13:14:30.790360  352949 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1124 13:14:30.813297  352949 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1124 13:14:30.813325  352949 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1124 13:14:30.826630  352949 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1124 13:14:30.826712  352949 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1124 13:14:30.858706  352949 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1124 13:14:30.858832  352949 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1124 13:14:30.882285  352949 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1124 13:14:30.882315  352949 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1124 13:14:30.920395  352949 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1124 13:14:30.943062  352949 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1124 13:14:31.093803  352949 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-715644" context rescaled to 1 replicas
	I1124 13:14:31.521729  352949 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.140134457s)
	I1124 13:14:31.521771  352949 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml: (1.137227397s)
	I1124 13:14:31.521781  352949 addons.go:495] Verifying addon ingress=true in "addons-715644"
	I1124 13:14:31.521823  352949 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (1.130126519s)
	I1124 13:14:31.521835  352949 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.129675424s)
	I1124 13:14:31.521934  352949 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.117258952s)
	I1124 13:14:31.522028  352949 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.107196596s)
	I1124 13:14:31.522180  352949 addons.go:495] Verifying addon metrics-server=true in "addons-715644"
	I1124 13:14:31.523502  352949 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-715644 service yakd-dashboard -n yakd-dashboard
	
	I1124 13:14:31.523512  352949 out.go:179] * Verifying ingress addon...
	I1124 13:14:31.525324  352949 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1124 13:14:31.527543  352949 kapi.go:86] Found 0 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1124 13:14:31.955201  352949 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.312770844s)
	W1124 13:14:31.955252  352949 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1124 13:14:31.955281  352949 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.255043243s)
	I1124 13:14:31.955281  352949 retry.go:31] will retry after 351.390124ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1124 13:14:31.955520  352949 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.035079932s)
	I1124 13:14:31.955554  352949 addons.go:495] Verifying addon csi-hostpath-driver=true in "addons-715644"
	I1124 13:14:31.955576  352949 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (1.012359968s)
	I1124 13:14:31.955604  352949 addons.go:495] Verifying addon registry=true in "addons-715644"
	I1124 13:14:31.957089  352949 out.go:179] * Verifying registry addon...
	I1124 13:14:31.957089  352949 out.go:179] * Verifying csi-hostpath-driver addon...
	I1124 13:14:31.959430  352949 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1124 13:14:31.959430  352949 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1124 13:14:31.964965  352949 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1124 13:14:31.964980  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:14:31.966246  352949 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1124 13:14:31.966262  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:14:32.065791  352949 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1124 13:14:32.065811  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:14:32.307678  352949 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1124 13:14:32.463086  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:14:32.463086  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:14:32.563335  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1124 13:14:32.586258  352949 node_ready.go:57] node "addons-715644" has "Ready":"False" status (will retry)
	I1124 13:14:32.961791  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:14:32.961845  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:14:33.028051  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:14:33.462507  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:14:33.462545  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:14:33.562981  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:14:33.962452  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:14:33.962583  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:14:34.027851  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:14:34.463179  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:14:34.463324  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:14:34.563795  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1124 13:14:34.586764  352949 node_ready.go:57] node "addons-715644" has "Ready":"False" status (will retry)
	I1124 13:14:34.766384  352949 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.458659892s)
	I1124 13:14:34.962220  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:14:34.962299  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:14:35.028241  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:14:35.462723  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:14:35.462733  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:14:35.528036  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:14:35.962468  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:14:35.962562  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:14:36.027766  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:14:36.462128  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:14:36.462235  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:14:36.562518  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:14:36.962056  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:14:36.962113  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:14:37.027979  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1124 13:14:37.086469  352949 node_ready.go:57] node "addons-715644" has "Ready":"False" status (will retry)
	I1124 13:14:37.462568  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:14:37.462589  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:14:37.563281  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:14:37.686121  352949 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1124 13:14:37.686198  352949 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-715644
	I1124 13:14:37.702762  352949 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/addons-715644/id_rsa Username:docker}
	I1124 13:14:37.807388  352949 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1124 13:14:37.818879  352949 addons.go:239] Setting addon gcp-auth=true in "addons-715644"
	I1124 13:14:37.818945  352949 host.go:66] Checking if "addons-715644" exists ...
	I1124 13:14:37.819275  352949 cli_runner.go:164] Run: docker container inspect addons-715644 --format={{.State.Status}}
	I1124 13:14:37.835233  352949 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1124 13:14:37.835284  352949 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-715644
	I1124 13:14:37.851315  352949 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/addons-715644/id_rsa Username:docker}
	I1124 13:14:37.948367  352949 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.4
	I1124 13:14:37.949532  352949 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1124 13:14:37.950561  352949 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1124 13:14:37.950577  352949 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1124 13:14:37.962550  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:14:37.962687  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:14:37.963502  352949 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1124 13:14:37.963522  352949 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1124 13:14:37.975350  352949 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1124 13:14:37.975364  352949 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1124 13:14:37.986998  352949 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1124 13:14:38.028467  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:14:38.279109  352949 addons.go:495] Verifying addon gcp-auth=true in "addons-715644"
	I1124 13:14:38.280290  352949 out.go:179] * Verifying gcp-auth addon...
	I1124 13:14:38.282042  352949 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1124 13:14:38.285528  352949 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1124 13:14:38.285543  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:14:38.461951  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:14:38.461997  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:14:38.527905  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:14:38.785142  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:14:38.962635  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:14:38.962761  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:14:39.027754  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1124 13:14:39.088429  352949 node_ready.go:57] node "addons-715644" has "Ready":"False" status (will retry)
	I1124 13:14:39.284828  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:14:39.462734  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:14:39.462759  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:14:39.527840  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:14:39.785131  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:14:39.962458  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:14:39.962464  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:14:40.027425  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:14:40.284905  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:14:40.462464  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:14:40.462548  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:14:40.527653  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:14:40.784767  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:14:40.962269  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:14:40.962269  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:14:41.028148  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:14:41.285046  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:14:41.462240  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:14:41.462328  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:14:41.528241  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1124 13:14:41.586248  352949 node_ready.go:57] node "addons-715644" has "Ready":"False" status (will retry)
	I1124 13:14:41.784853  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:14:41.962280  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:14:41.962371  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:14:42.028353  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:14:42.283994  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:14:42.462564  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:14:42.462564  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:14:42.527591  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:14:42.784528  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:14:42.961924  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:14:42.962009  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:14:43.027949  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:14:43.284806  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:14:43.462215  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:14:43.462302  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:14:43.528314  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:14:43.784488  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:14:43.962303  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:14:43.962454  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:14:44.028848  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1124 13:14:44.086648  352949 node_ready.go:57] node "addons-715644" has "Ready":"False" status (will retry)
	I1124 13:14:44.285310  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:14:44.462470  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:14:44.462486  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:14:44.527395  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:14:44.784906  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:14:44.962282  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:14:44.962332  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:14:45.028246  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:14:45.284583  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:14:45.462270  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:14:45.462420  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:14:45.527474  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:14:45.784978  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:14:45.962206  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:14:45.962284  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:14:46.028178  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1124 13:14:46.086880  352949 node_ready.go:57] node "addons-715644" has "Ready":"False" status (will retry)
	I1124 13:14:46.285368  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:14:46.462962  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:14:46.463070  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:14:46.527763  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:14:46.785179  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:14:46.962655  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:14:46.962738  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:14:47.027882  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:14:47.285051  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:14:47.462467  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:14:47.462538  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:14:47.528687  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:14:47.784922  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:14:47.962443  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:14:47.962515  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:14:48.029172  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:14:48.284063  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:14:48.462306  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:14:48.462315  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:14:48.528301  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1124 13:14:48.585954  352949 node_ready.go:57] node "addons-715644" has "Ready":"False" status (will retry)
	I1124 13:14:48.785333  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:14:48.962921  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:14:48.962989  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:14:49.027844  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:14:49.284860  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:14:49.462401  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:14:49.462554  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:14:49.527359  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:14:49.784730  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:14:49.962092  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:14:49.962119  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:14:50.028259  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:14:50.284541  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:14:50.461922  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:14:50.462047  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:14:50.528034  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1124 13:14:50.586864  352949 node_ready.go:57] node "addons-715644" has "Ready":"False" status (will retry)
	I1124 13:14:50.785229  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:14:50.962711  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:14:50.962882  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:14:51.027687  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:14:51.284664  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:14:51.461848  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:14:51.461901  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:14:51.527763  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:14:51.784977  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:14:51.962283  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:14:51.962436  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:14:52.027518  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:14:52.284799  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:14:52.462369  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:14:52.462434  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:14:52.528516  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:14:52.784630  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:14:52.962096  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:14:52.962222  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:14:53.028238  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1124 13:14:53.085796  352949 node_ready.go:57] node "addons-715644" has "Ready":"False" status (will retry)
	I1124 13:14:53.285305  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:14:53.462678  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:14:53.462701  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:14:53.527686  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:14:53.784634  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:14:53.962029  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:14:53.962045  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:14:54.028121  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:14:54.284580  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:14:54.462283  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:14:54.462404  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:14:54.527396  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:14:54.784671  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:14:54.962553  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:14:54.962623  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:14:55.027781  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1124 13:14:55.086954  352949 node_ready.go:57] node "addons-715644" has "Ready":"False" status (will retry)
	I1124 13:14:55.285172  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:14:55.462634  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:14:55.462708  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:14:55.527991  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:14:55.785320  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:14:55.962624  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:14:55.962735  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:14:56.027898  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:14:56.285244  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:14:56.462490  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:14:56.462592  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:14:56.527524  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:14:56.784580  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:14:56.961821  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:14:56.961941  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:14:57.027502  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:14:57.284362  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:14:57.462451  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:14:57.462533  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:14:57.527641  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1124 13:14:57.586596  352949 node_ready.go:57] node "addons-715644" has "Ready":"False" status (will retry)
	I1124 13:14:57.784916  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:14:57.961990  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:14:57.962129  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:14:58.028083  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:14:58.284938  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:14:58.462187  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:14:58.462263  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:14:58.528119  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:14:58.785135  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:14:58.962544  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:14:58.962767  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:14:59.027752  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:14:59.284746  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:14:59.462223  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:14:59.462234  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:14:59.528169  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1124 13:14:59.587037  352949 node_ready.go:57] node "addons-715644" has "Ready":"False" status (will retry)
	I1124 13:14:59.784407  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:14:59.962816  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:14:59.962904  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:00.027854  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:00.284392  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:00.462754  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:00.462758  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:00.527707  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:00.785040  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:00.962409  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:00.962440  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:01.028454  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:01.284675  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:01.461944  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:01.462024  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:01.528099  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:01.785378  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:01.962444  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:01.962453  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:02.027336  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1124 13:15:02.086392  352949 node_ready.go:57] node "addons-715644" has "Ready":"False" status (will retry)
	I1124 13:15:02.284872  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:02.462223  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:02.462346  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:02.528227  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:02.784811  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:02.962242  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:02.962307  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:03.028401  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:03.284849  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:03.462294  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:03.462391  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:03.527517  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:03.784612  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:03.962037  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:03.962087  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:04.028048  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1124 13:15:04.086775  352949 node_ready.go:57] node "addons-715644" has "Ready":"False" status (will retry)
	I1124 13:15:04.285076  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:04.462575  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:04.462661  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:04.527786  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:04.784817  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:04.962263  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:04.962308  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:05.028237  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:05.285160  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:05.462786  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:05.462968  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:05.528323  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:05.784697  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:05.961824  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:05.962014  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:06.027671  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:06.284786  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:06.462000  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:06.462033  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:06.527907  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1124 13:15:06.586675  352949 node_ready.go:57] node "addons-715644" has "Ready":"False" status (will retry)
	I1124 13:15:06.785118  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:06.962138  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:06.962319  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:07.028228  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:07.284170  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:07.462494  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:07.462600  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:07.527565  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:07.784975  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:07.962472  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:07.962520  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:08.027783  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:08.284926  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:08.462307  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:08.462388  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:08.527395  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:08.784638  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:08.962252  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:08.962274  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:09.028330  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1124 13:15:09.086315  352949 node_ready.go:57] node "addons-715644" has "Ready":"False" status (will retry)
	I1124 13:15:09.284495  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:09.461584  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:09.461679  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:09.527612  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:09.784690  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:09.962182  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:09.962190  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:10.028269  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:10.284651  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:10.461984  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:10.462053  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:10.527857  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:10.785222  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:10.962372  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:10.962468  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:11.029520  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1124 13:15:11.086384  352949 node_ready.go:57] node "addons-715644" has "Ready":"False" status (will retry)
	I1124 13:15:11.285074  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:11.462875  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:11.465090  352949 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1124 13:15:11.465124  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:11.530275  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:11.587174  352949 node_ready.go:49] node "addons-715644" is "Ready"
	I1124 13:15:11.587210  352949 node_ready.go:38] duration metric: took 41.003116201s for node "addons-715644" to be "Ready" ...
	I1124 13:15:11.587232  352949 api_server.go:52] waiting for apiserver process to appear ...
	I1124 13:15:11.587291  352949 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 13:15:11.602456  352949 api_server.go:72] duration metric: took 41.579028892s to wait for apiserver process to appear ...
	I1124 13:15:11.602488  352949 api_server.go:88] waiting for apiserver healthz status ...
	I1124 13:15:11.602510  352949 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1124 13:15:11.607083  352949 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1124 13:15:11.608095  352949 api_server.go:141] control plane version: v1.34.1
	I1124 13:15:11.608124  352949 api_server.go:131] duration metric: took 5.627713ms to wait for apiserver health ...
	I1124 13:15:11.608136  352949 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 13:15:11.611793  352949 system_pods.go:59] 20 kube-system pods found
	I1124 13:15:11.611829  352949 system_pods.go:61] "amd-gpu-device-plugin-hxftx" [0a8e4e82-1ce0-4f98-9dd7-0239163661a3] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1124 13:15:11.611841  352949 system_pods.go:61] "coredns-66bc5c9577-8kqrg" [a8a7a050-acf2-455f-8a19-63ee9e9aee24] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 13:15:11.611853  352949 system_pods.go:61] "csi-hostpath-attacher-0" [9802f470-f9b0-4dce-ae14-a7a307ad6302] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1124 13:15:11.611866  352949 system_pods.go:61] "csi-hostpath-resizer-0" [51d376b9-9b34-4a04-83ac-7314ecf41dfd] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1124 13:15:11.611878  352949 system_pods.go:61] "csi-hostpathplugin-vghhv" [59812e95-9cdf-40cc-b25f-9e63c8e5157e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1124 13:15:11.611897  352949 system_pods.go:61] "etcd-addons-715644" [dc5e506e-ade9-4f45-88ce-a00029f5bac0] Running
	I1124 13:15:11.611903  352949 system_pods.go:61] "kindnet-jb6km" [e9f0918e-0b2b-412d-b57f-5c00a40fada8] Running
	I1124 13:15:11.611908  352949 system_pods.go:61] "kube-apiserver-addons-715644" [d6823402-d724-4711-b547-3d7fe46a3013] Running
	I1124 13:15:11.611914  352949 system_pods.go:61] "kube-controller-manager-addons-715644" [d14f128b-31c4-456a-ba08-bfc2ff9c0460] Running
	I1124 13:15:11.611925  352949 system_pods.go:61] "kube-ingress-dns-minikube" [dac12abd-7cd5-4d8c-99e7-e64d99904007] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1124 13:15:11.611931  352949 system_pods.go:61] "kube-proxy-c7prv" [49ab568b-e2d2-447c-a415-05870090b63f] Running
	I1124 13:15:11.611937  352949 system_pods.go:61] "kube-scheduler-addons-715644" [69d8c857-de09-4f63-8c1f-5aa615f7dfc7] Running
	I1124 13:15:11.611948  352949 system_pods.go:61] "metrics-server-85b7d694d7-4fdfd" [1f2dc823-40d4-4194-b831-1c70bbcf7b66] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1124 13:15:11.611955  352949 system_pods.go:61] "nvidia-device-plugin-daemonset-h8tqs" [b4524963-77cc-46be-83b4-8a0f045e9846] Pending
	I1124 13:15:11.611966  352949 system_pods.go:61] "registry-6b586f9694-x6s72" [0fc44edb-9f6c-414d-a733-43015903fde8] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1124 13:15:11.611975  352949 system_pods.go:61] "registry-creds-764b6fb674-4tmmd" [c1b81cb2-69d5-4d69-a8f2-14e4d4a88632] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1124 13:15:11.611984  352949 system_pods.go:61] "registry-proxy-kx44z" [98e78cf2-d459-4d88-8617-05ab22523a89] Pending
	I1124 13:15:11.611994  352949 system_pods.go:61] "snapshot-controller-7d9fbc56b8-7jfmk" [756a8f9b-ebff-4482-9f0a-48bc861b05c7] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1124 13:15:11.612006  352949 system_pods.go:61] "snapshot-controller-7d9fbc56b8-9lv6w" [acbfda98-591a-4a98-a1c9-313807b5cb1f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1124 13:15:11.612023  352949 system_pods.go:61] "storage-provisioner" [dbc5c46b-5b5e-4b7c-9f2b-0b773eb48153] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 13:15:11.612034  352949 system_pods.go:74] duration metric: took 3.890111ms to wait for pod list to return data ...
	I1124 13:15:11.612046  352949 default_sa.go:34] waiting for default service account to be created ...
	I1124 13:15:11.614094  352949 default_sa.go:45] found service account: "default"
	I1124 13:15:11.614113  352949 default_sa.go:55] duration metric: took 2.058637ms for default service account to be created ...
	I1124 13:15:11.614122  352949 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 13:15:11.619342  352949 system_pods.go:86] 20 kube-system pods found
	I1124 13:15:11.619378  352949 system_pods.go:89] "amd-gpu-device-plugin-hxftx" [0a8e4e82-1ce0-4f98-9dd7-0239163661a3] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1124 13:15:11.619389  352949 system_pods.go:89] "coredns-66bc5c9577-8kqrg" [a8a7a050-acf2-455f-8a19-63ee9e9aee24] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 13:15:11.619401  352949 system_pods.go:89] "csi-hostpath-attacher-0" [9802f470-f9b0-4dce-ae14-a7a307ad6302] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1124 13:15:11.619409  352949 system_pods.go:89] "csi-hostpath-resizer-0" [51d376b9-9b34-4a04-83ac-7314ecf41dfd] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1124 13:15:11.619422  352949 system_pods.go:89] "csi-hostpathplugin-vghhv" [59812e95-9cdf-40cc-b25f-9e63c8e5157e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1124 13:15:11.619432  352949 system_pods.go:89] "etcd-addons-715644" [dc5e506e-ade9-4f45-88ce-a00029f5bac0] Running
	I1124 13:15:11.619442  352949 system_pods.go:89] "kindnet-jb6km" [e9f0918e-0b2b-412d-b57f-5c00a40fada8] Running
	I1124 13:15:11.619448  352949 system_pods.go:89] "kube-apiserver-addons-715644" [d6823402-d724-4711-b547-3d7fe46a3013] Running
	I1124 13:15:11.619457  352949 system_pods.go:89] "kube-controller-manager-addons-715644" [d14f128b-31c4-456a-ba08-bfc2ff9c0460] Running
	I1124 13:15:11.619466  352949 system_pods.go:89] "kube-ingress-dns-minikube" [dac12abd-7cd5-4d8c-99e7-e64d99904007] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1124 13:15:11.619475  352949 system_pods.go:89] "kube-proxy-c7prv" [49ab568b-e2d2-447c-a415-05870090b63f] Running
	I1124 13:15:11.619484  352949 system_pods.go:89] "kube-scheduler-addons-715644" [69d8c857-de09-4f63-8c1f-5aa615f7dfc7] Running
	I1124 13:15:11.619495  352949 system_pods.go:89] "metrics-server-85b7d694d7-4fdfd" [1f2dc823-40d4-4194-b831-1c70bbcf7b66] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1124 13:15:11.619501  352949 system_pods.go:89] "nvidia-device-plugin-daemonset-h8tqs" [b4524963-77cc-46be-83b4-8a0f045e9846] Pending
	I1124 13:15:11.619515  352949 system_pods.go:89] "registry-6b586f9694-x6s72" [0fc44edb-9f6c-414d-a733-43015903fde8] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1124 13:15:11.619526  352949 system_pods.go:89] "registry-creds-764b6fb674-4tmmd" [c1b81cb2-69d5-4d69-a8f2-14e4d4a88632] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1124 13:15:11.619532  352949 system_pods.go:89] "registry-proxy-kx44z" [98e78cf2-d459-4d88-8617-05ab22523a89] Pending
	I1124 13:15:11.619544  352949 system_pods.go:89] "snapshot-controller-7d9fbc56b8-7jfmk" [756a8f9b-ebff-4482-9f0a-48bc861b05c7] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1124 13:15:11.619557  352949 system_pods.go:89] "snapshot-controller-7d9fbc56b8-9lv6w" [acbfda98-591a-4a98-a1c9-313807b5cb1f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1124 13:15:11.619567  352949 system_pods.go:89] "storage-provisioner" [dbc5c46b-5b5e-4b7c-9f2b-0b773eb48153] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 13:15:11.619593  352949 retry.go:31] will retry after 238.25305ms: missing components: kube-dns
	I1124 13:15:11.785531  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:11.887749  352949 system_pods.go:86] 20 kube-system pods found
	I1124 13:15:11.887787  352949 system_pods.go:89] "amd-gpu-device-plugin-hxftx" [0a8e4e82-1ce0-4f98-9dd7-0239163661a3] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1124 13:15:11.887798  352949 system_pods.go:89] "coredns-66bc5c9577-8kqrg" [a8a7a050-acf2-455f-8a19-63ee9e9aee24] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 13:15:11.887814  352949 system_pods.go:89] "csi-hostpath-attacher-0" [9802f470-f9b0-4dce-ae14-a7a307ad6302] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1124 13:15:11.887824  352949 system_pods.go:89] "csi-hostpath-resizer-0" [51d376b9-9b34-4a04-83ac-7314ecf41dfd] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1124 13:15:11.887833  352949 system_pods.go:89] "csi-hostpathplugin-vghhv" [59812e95-9cdf-40cc-b25f-9e63c8e5157e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1124 13:15:11.887846  352949 system_pods.go:89] "etcd-addons-715644" [dc5e506e-ade9-4f45-88ce-a00029f5bac0] Running
	I1124 13:15:11.887854  352949 system_pods.go:89] "kindnet-jb6km" [e9f0918e-0b2b-412d-b57f-5c00a40fada8] Running
	I1124 13:15:11.887860  352949 system_pods.go:89] "kube-apiserver-addons-715644" [d6823402-d724-4711-b547-3d7fe46a3013] Running
	I1124 13:15:11.887877  352949 system_pods.go:89] "kube-controller-manager-addons-715644" [d14f128b-31c4-456a-ba08-bfc2ff9c0460] Running
	I1124 13:15:11.887918  352949 system_pods.go:89] "kube-ingress-dns-minikube" [dac12abd-7cd5-4d8c-99e7-e64d99904007] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1124 13:15:11.887930  352949 system_pods.go:89] "kube-proxy-c7prv" [49ab568b-e2d2-447c-a415-05870090b63f] Running
	I1124 13:15:11.887936  352949 system_pods.go:89] "kube-scheduler-addons-715644" [69d8c857-de09-4f63-8c1f-5aa615f7dfc7] Running
	I1124 13:15:11.887944  352949 system_pods.go:89] "metrics-server-85b7d694d7-4fdfd" [1f2dc823-40d4-4194-b831-1c70bbcf7b66] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1124 13:15:11.887953  352949 system_pods.go:89] "nvidia-device-plugin-daemonset-h8tqs" [b4524963-77cc-46be-83b4-8a0f045e9846] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1124 13:15:11.887962  352949 system_pods.go:89] "registry-6b586f9694-x6s72" [0fc44edb-9f6c-414d-a733-43015903fde8] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1124 13:15:11.887970  352949 system_pods.go:89] "registry-creds-764b6fb674-4tmmd" [c1b81cb2-69d5-4d69-a8f2-14e4d4a88632] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1124 13:15:11.887983  352949 system_pods.go:89] "registry-proxy-kx44z" [98e78cf2-d459-4d88-8617-05ab22523a89] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1124 13:15:11.887991  352949 system_pods.go:89] "snapshot-controller-7d9fbc56b8-7jfmk" [756a8f9b-ebff-4482-9f0a-48bc861b05c7] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1124 13:15:11.887999  352949 system_pods.go:89] "snapshot-controller-7d9fbc56b8-9lv6w" [acbfda98-591a-4a98-a1c9-313807b5cb1f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1124 13:15:11.888007  352949 system_pods.go:89] "storage-provisioner" [dbc5c46b-5b5e-4b7c-9f2b-0b773eb48153] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 13:15:11.888029  352949 retry.go:31] will retry after 313.084796ms: missing components: kube-dns
	I1124 13:15:11.985814  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:11.985861  352949 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1124 13:15:11.985874  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:12.028940  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:12.205709  352949 system_pods.go:86] 20 kube-system pods found
	I1124 13:15:12.205748  352949 system_pods.go:89] "amd-gpu-device-plugin-hxftx" [0a8e4e82-1ce0-4f98-9dd7-0239163661a3] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1124 13:15:12.205756  352949 system_pods.go:89] "coredns-66bc5c9577-8kqrg" [a8a7a050-acf2-455f-8a19-63ee9e9aee24] Running
	I1124 13:15:12.205765  352949 system_pods.go:89] "csi-hostpath-attacher-0" [9802f470-f9b0-4dce-ae14-a7a307ad6302] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1124 13:15:12.205772  352949 system_pods.go:89] "csi-hostpath-resizer-0" [51d376b9-9b34-4a04-83ac-7314ecf41dfd] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1124 13:15:12.205780  352949 system_pods.go:89] "csi-hostpathplugin-vghhv" [59812e95-9cdf-40cc-b25f-9e63c8e5157e] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1124 13:15:12.205786  352949 system_pods.go:89] "etcd-addons-715644" [dc5e506e-ade9-4f45-88ce-a00029f5bac0] Running
	I1124 13:15:12.205791  352949 system_pods.go:89] "kindnet-jb6km" [e9f0918e-0b2b-412d-b57f-5c00a40fada8] Running
	I1124 13:15:12.205799  352949 system_pods.go:89] "kube-apiserver-addons-715644" [d6823402-d724-4711-b547-3d7fe46a3013] Running
	I1124 13:15:12.205809  352949 system_pods.go:89] "kube-controller-manager-addons-715644" [d14f128b-31c4-456a-ba08-bfc2ff9c0460] Running
	I1124 13:15:12.205820  352949 system_pods.go:89] "kube-ingress-dns-minikube" [dac12abd-7cd5-4d8c-99e7-e64d99904007] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1124 13:15:12.205828  352949 system_pods.go:89] "kube-proxy-c7prv" [49ab568b-e2d2-447c-a415-05870090b63f] Running
	I1124 13:15:12.205834  352949 system_pods.go:89] "kube-scheduler-addons-715644" [69d8c857-de09-4f63-8c1f-5aa615f7dfc7] Running
	I1124 13:15:12.205844  352949 system_pods.go:89] "metrics-server-85b7d694d7-4fdfd" [1f2dc823-40d4-4194-b831-1c70bbcf7b66] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1124 13:15:12.205853  352949 system_pods.go:89] "nvidia-device-plugin-daemonset-h8tqs" [b4524963-77cc-46be-83b4-8a0f045e9846] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1124 13:15:12.205865  352949 system_pods.go:89] "registry-6b586f9694-x6s72" [0fc44edb-9f6c-414d-a733-43015903fde8] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1124 13:15:12.205877  352949 system_pods.go:89] "registry-creds-764b6fb674-4tmmd" [c1b81cb2-69d5-4d69-a8f2-14e4d4a88632] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1124 13:15:12.205902  352949 system_pods.go:89] "registry-proxy-kx44z" [98e78cf2-d459-4d88-8617-05ab22523a89] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1124 13:15:12.205915  352949 system_pods.go:89] "snapshot-controller-7d9fbc56b8-7jfmk" [756a8f9b-ebff-4482-9f0a-48bc861b05c7] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1124 13:15:12.205928  352949 system_pods.go:89] "snapshot-controller-7d9fbc56b8-9lv6w" [acbfda98-591a-4a98-a1c9-313807b5cb1f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1124 13:15:12.205936  352949 system_pods.go:89] "storage-provisioner" [dbc5c46b-5b5e-4b7c-9f2b-0b773eb48153] Running
	I1124 13:15:12.205949  352949 system_pods.go:126] duration metric: took 591.819695ms to wait for k8s-apps to be running ...
	I1124 13:15:12.205963  352949 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 13:15:12.206015  352949 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 13:15:12.219097  352949 system_svc.go:56] duration metric: took 13.126375ms WaitForService to wait for kubelet
	I1124 13:15:12.219124  352949 kubeadm.go:587] duration metric: took 42.195702775s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 13:15:12.219152  352949 node_conditions.go:102] verifying NodePressure condition ...
	I1124 13:15:12.221562  352949 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1124 13:15:12.221586  352949 node_conditions.go:123] node cpu capacity is 8
	I1124 13:15:12.221604  352949 node_conditions.go:105] duration metric: took 2.446287ms to run NodePressure ...
	I1124 13:15:12.221616  352949 start.go:242] waiting for startup goroutines ...
	I1124 13:15:12.284948  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:12.462878  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:12.463071  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:12.529171  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:12.786248  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:12.964180  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:12.964312  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:13.029719  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:13.285869  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:13.463035  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:13.463065  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:13.528948  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:13.785860  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:13.963217  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:13.963413  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:14.029405  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:14.285023  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:14.463179  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:14.463425  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:14.529232  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:14.785620  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:14.962753  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:14.962754  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:15.028691  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:15.285656  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:15.462506  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:15.462605  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:15.528449  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:15.785047  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:15.964194  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:15.964967  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:16.029263  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:16.287481  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:16.464683  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:16.464765  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:16.529052  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:16.786286  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:16.963548  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:16.963752  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:17.027907  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:17.285982  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:17.463140  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:17.463519  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:17.529090  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:17.786014  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:17.963095  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:17.963254  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:18.029046  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:18.286129  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:18.463373  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:18.463451  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:18.528853  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:18.786070  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:18.962649  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:18.962822  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:19.029098  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:19.303352  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:19.464041  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:19.464314  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:19.529511  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:19.785150  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:19.963170  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:19.963324  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:20.063560  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:20.285230  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:20.463298  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:20.463458  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:20.529258  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:20.785992  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:20.964256  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:20.966008  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:21.028729  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:21.285438  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:21.463664  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:21.463794  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:21.528675  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:21.785289  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:21.963994  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:21.964072  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:22.044093  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:22.284506  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:22.462314  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:22.462357  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:22.528550  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:22.785306  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:22.962713  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:22.962843  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:23.027915  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:23.285620  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:23.462477  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:23.462694  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:23.528054  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:23.785342  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:23.962352  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:23.962598  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:24.028595  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:24.286069  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:24.463669  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:24.463883  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:24.528384  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:24.785618  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:24.962989  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:24.963121  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:25.028651  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:25.285346  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:25.462929  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:25.463090  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:25.528620  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:25.785298  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:25.963336  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:25.963453  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:26.028972  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:26.287116  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:26.463168  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:26.463349  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:26.527656  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:26.785569  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:26.962731  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:26.962832  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:27.028655  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:27.285739  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:27.467262  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:27.467514  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:27.543490  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:27.785503  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:27.963068  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:27.963151  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:28.028418  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:28.285777  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:28.462253  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:28.462529  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:28.528824  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:28.785406  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:28.962102  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:28.962221  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:29.028217  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:29.284489  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:29.462611  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:29.462667  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:29.528778  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:29.786052  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:29.962648  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:29.962719  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:30.027507  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:30.285655  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:30.462531  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:30.462603  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:30.528213  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:30.784673  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:30.962502  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:30.962737  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:31.062587  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:31.284974  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:31.462582  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:31.462632  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:31.527901  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:31.785808  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:31.963050  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:31.963156  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:32.028566  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:32.285229  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:32.462622  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:32.462902  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:32.527774  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:32.789377  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:32.962911  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:32.963094  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:33.028378  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:33.285198  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:33.463479  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:33.463571  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:33.528200  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:33.784562  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:33.962448  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1124 13:15:33.962620  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:34.027845  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:34.285196  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:34.463299  352949 kapi.go:107] duration metric: took 1m2.503864265s to wait for kubernetes.io/minikube-addons=registry ...
	I1124 13:15:34.463489  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:34.528533  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:34.785494  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:34.964405  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:35.030277  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:35.286092  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:35.463233  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:35.528929  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:35.786261  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:35.963360  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:36.029409  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:36.329650  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:36.485244  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:36.528506  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:36.785123  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:36.963503  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:37.029118  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:37.285979  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:37.462954  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:37.528080  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:37.785454  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:37.962838  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:38.029205  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:38.286653  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:38.463192  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:38.529102  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:38.785164  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:38.963634  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:39.028267  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:39.286292  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:39.463831  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:39.528677  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:39.787057  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:39.963138  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:40.029002  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:40.284729  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:40.462455  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:40.529401  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:40.784787  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:40.962258  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:41.028422  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:41.285462  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:41.463179  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:41.528945  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:41.785535  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:41.962198  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:42.028859  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:42.286034  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:42.463225  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:42.529188  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:42.785058  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:42.962750  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1124 13:15:43.027820  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:43.285160  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:43.462838  352949 kapi.go:107] duration metric: took 1m11.503403005s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1124 13:15:43.528005  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:43.784603  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:44.029246  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:44.286751  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:44.528672  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:44.785231  352949 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1124 13:15:45.029497  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:45.285876  352949 kapi.go:107] duration metric: took 1m7.003829292s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1124 13:15:45.287029  352949 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-715644 cluster.
	I1124 13:15:45.288263  352949 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1124 13:15:45.289528  352949 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1124 13:15:45.530576  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:46.030191  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:46.528515  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:47.028928  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:47.529040  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:48.029242  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:48.528547  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:49.029035  352949 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1124 13:15:49.529062  352949 kapi.go:107] duration metric: took 1m18.003733484s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1124 13:15:49.530469  352949 out.go:179] * Enabled addons: registry-creds, amd-gpu-device-plugin, ingress-dns, inspektor-gadget, cloud-spanner, nvidia-device-plugin, storage-provisioner, metrics-server, yakd, storage-provisioner-rancher, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, gcp-auth, ingress
	I1124 13:15:49.531623  352949 addons.go:530] duration metric: took 1m19.508163981s for enable addons: enabled=[registry-creds amd-gpu-device-plugin ingress-dns inspektor-gadget cloud-spanner nvidia-device-plugin storage-provisioner metrics-server yakd storage-provisioner-rancher default-storageclass volumesnapshots registry csi-hostpath-driver gcp-auth ingress]
	I1124 13:15:49.531662  352949 start.go:247] waiting for cluster config update ...
	I1124 13:15:49.531685  352949 start.go:256] writing updated cluster config ...
	I1124 13:15:49.531946  352949 ssh_runner.go:195] Run: rm -f paused
	I1124 13:15:49.536016  352949 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 13:15:49.538533  352949 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-8kqrg" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:15:49.542206  352949 pod_ready.go:94] pod "coredns-66bc5c9577-8kqrg" is "Ready"
	I1124 13:15:49.542225  352949 pod_ready.go:86] duration metric: took 3.673868ms for pod "coredns-66bc5c9577-8kqrg" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:15:49.543863  352949 pod_ready.go:83] waiting for pod "etcd-addons-715644" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:15:49.547179  352949 pod_ready.go:94] pod "etcd-addons-715644" is "Ready"
	I1124 13:15:49.547195  352949 pod_ready.go:86] duration metric: took 3.314923ms for pod "etcd-addons-715644" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:15:49.548817  352949 pod_ready.go:83] waiting for pod "kube-apiserver-addons-715644" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:15:49.551864  352949 pod_ready.go:94] pod "kube-apiserver-addons-715644" is "Ready"
	I1124 13:15:49.551881  352949 pod_ready.go:86] duration metric: took 3.04732ms for pod "kube-apiserver-addons-715644" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:15:49.553470  352949 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-715644" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:15:49.939409  352949 pod_ready.go:94] pod "kube-controller-manager-addons-715644" is "Ready"
	I1124 13:15:49.939443  352949 pod_ready.go:86] duration metric: took 385.955009ms for pod "kube-controller-manager-addons-715644" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:15:50.140540  352949 pod_ready.go:83] waiting for pod "kube-proxy-c7prv" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:15:50.539859  352949 pod_ready.go:94] pod "kube-proxy-c7prv" is "Ready"
	I1124 13:15:50.539906  352949 pod_ready.go:86] duration metric: took 399.318831ms for pod "kube-proxy-c7prv" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:15:50.740196  352949 pod_ready.go:83] waiting for pod "kube-scheduler-addons-715644" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:15:51.139228  352949 pod_ready.go:94] pod "kube-scheduler-addons-715644" is "Ready"
	I1124 13:15:51.139258  352949 pod_ready.go:86] duration metric: took 399.037371ms for pod "kube-scheduler-addons-715644" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:15:51.139275  352949 pod_ready.go:40] duration metric: took 1.603221221s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 13:15:51.184686  352949 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1124 13:15:51.186224  352949 out.go:179] * Done! kubectl is now configured to use "addons-715644" cluster and "default" namespace by default
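
Editor's note: the gcp-auth messages above state that GCP credentials are mounted into every new pod unless the pod carries the `gcp-auth-skip-secret` label. As a minimal sketch of opting a pod out (the pod name, image, and label value here are chosen only for illustration; only the label key comes from the minikube output above), this is not part of the test run:

    kubectl --context addons-715644 apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: no-gcp-creds                  # hypothetical pod name, for illustration only
      labels:
        gcp-auth-skip-secret: "true"      # label key quoted in the minikube output above; any value should do
    spec:
      containers:
      - name: busybox
        image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
        command: ["sleep", "3600"]
    EOF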
	
	
	==> CRI-O <==
	Nov 24 13:15:51 addons-715644 crio[779]: time="2025-11-24T13:15:51.995832486Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=7036fddc-4934-4c28-b1a8-0388ede54752 name=/runtime.v1.ImageService/PullImage
	Nov 24 13:15:51 addons-715644 crio[779]: time="2025-11-24T13:15:51.997308092Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 24 13:15:52 addons-715644 crio[779]: time="2025-11-24T13:15:52.624286544Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=7036fddc-4934-4c28-b1a8-0388ede54752 name=/runtime.v1.ImageService/PullImage
	Nov 24 13:15:52 addons-715644 crio[779]: time="2025-11-24T13:15:52.624782073Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=19a3484a-d253-41f5-95c8-0b2916821ab6 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 13:15:52 addons-715644 crio[779]: time="2025-11-24T13:15:52.626012036Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=56146872-dc0b-4e72-aac5-f226f355acfd name=/runtime.v1.ImageService/ImageStatus
	Nov 24 13:15:52 addons-715644 crio[779]: time="2025-11-24T13:15:52.628944647Z" level=info msg="Creating container: default/busybox/busybox" id=51d33d4d-af15-401b-bd17-6c011858ddcf name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 13:15:52 addons-715644 crio[779]: time="2025-11-24T13:15:52.629065004Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 13:15:52 addons-715644 crio[779]: time="2025-11-24T13:15:52.634179932Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 13:15:52 addons-715644 crio[779]: time="2025-11-24T13:15:52.634555501Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 13:15:52 addons-715644 crio[779]: time="2025-11-24T13:15:52.667388759Z" level=info msg="Created container cdd07a7ef83b299d680d0fad99f6281c98e9a19a118fc9ba5003d61201409aa7: default/busybox/busybox" id=51d33d4d-af15-401b-bd17-6c011858ddcf name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 13:15:52 addons-715644 crio[779]: time="2025-11-24T13:15:52.667854401Z" level=info msg="Starting container: cdd07a7ef83b299d680d0fad99f6281c98e9a19a118fc9ba5003d61201409aa7" id=ab718e85-31c3-4912-9fe9-65b3d851976b name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 13:15:52 addons-715644 crio[779]: time="2025-11-24T13:15:52.669420725Z" level=info msg="Started container" PID=6294 containerID=cdd07a7ef83b299d680d0fad99f6281c98e9a19a118fc9ba5003d61201409aa7 description=default/busybox/busybox id=ab718e85-31c3-4912-9fe9-65b3d851976b name=/runtime.v1.RuntimeService/StartContainer sandboxID=0e1d54a5e14d26c9125a15ef31acb0b7677123e4c7cb23376f6ebbb4eee6f8b5
	Nov 24 13:16:00 addons-715644 crio[779]: time="2025-11-24T13:16:00.792171967Z" level=info msg="Running pod sandbox: local-path-storage/helper-pod-create-pvc-b07547c9-8acb-4c52-b115-c56befc42fff/POD" id=2e237d28-5d69-445d-b7a6-65a2e58e1c0f name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 24 13:16:00 addons-715644 crio[779]: time="2025-11-24T13:16:00.792245876Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 13:16:00 addons-715644 crio[779]: time="2025-11-24T13:16:00.799043915Z" level=info msg="Got pod network &{Name:helper-pod-create-pvc-b07547c9-8acb-4c52-b115-c56befc42fff Namespace:local-path-storage ID:55cf9ab9c61f99e546514c2876796348c963437d90375aafe7bbf592f793d909 UID:2549f93d-2fe1-4944-b569-6a369c700eff NetNS:/var/run/netns/d64195f3-2b5a-4491-9100-75e41422eb56 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0009865d0}] Aliases:map[]}"
	Nov 24 13:16:00 addons-715644 crio[779]: time="2025-11-24T13:16:00.799073589Z" level=info msg="Adding pod local-path-storage_helper-pod-create-pvc-b07547c9-8acb-4c52-b115-c56befc42fff to CNI network \"kindnet\" (type=ptp)"
	Nov 24 13:16:00 addons-715644 crio[779]: time="2025-11-24T13:16:00.808690252Z" level=info msg="Got pod network &{Name:helper-pod-create-pvc-b07547c9-8acb-4c52-b115-c56befc42fff Namespace:local-path-storage ID:55cf9ab9c61f99e546514c2876796348c963437d90375aafe7bbf592f793d909 UID:2549f93d-2fe1-4944-b569-6a369c700eff NetNS:/var/run/netns/d64195f3-2b5a-4491-9100-75e41422eb56 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0009865d0}] Aliases:map[]}"
	Nov 24 13:16:00 addons-715644 crio[779]: time="2025-11-24T13:16:00.808835784Z" level=info msg="Checking pod local-path-storage_helper-pod-create-pvc-b07547c9-8acb-4c52-b115-c56befc42fff for CNI network kindnet (type=ptp)"
	Nov 24 13:16:00 addons-715644 crio[779]: time="2025-11-24T13:16:00.809578944Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 24 13:16:00 addons-715644 crio[779]: time="2025-11-24T13:16:00.810548231Z" level=info msg="Ran pod sandbox 55cf9ab9c61f99e546514c2876796348c963437d90375aafe7bbf592f793d909 with infra container: local-path-storage/helper-pod-create-pvc-b07547c9-8acb-4c52-b115-c56befc42fff/POD" id=2e237d28-5d69-445d-b7a6-65a2e58e1c0f name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 24 13:16:00 addons-715644 crio[779]: time="2025-11-24T13:16:00.811774438Z" level=info msg="Checking image status: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=014b8436-aa80-42ef-9bef-c24920ce7c9f name=/runtime.v1.ImageService/ImageStatus
	Nov 24 13:16:00 addons-715644 crio[779]: time="2025-11-24T13:16:00.811953442Z" level=info msg="Image docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 not found" id=014b8436-aa80-42ef-9bef-c24920ce7c9f name=/runtime.v1.ImageService/ImageStatus
	Nov 24 13:16:00 addons-715644 crio[779]: time="2025-11-24T13:16:00.812040141Z" level=info msg="Neither image nor artfiact docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79 found" id=014b8436-aa80-42ef-9bef-c24920ce7c9f name=/runtime.v1.ImageService/ImageStatus
	Nov 24 13:16:00 addons-715644 crio[779]: time="2025-11-24T13:16:00.812572168Z" level=info msg="Pulling image: docker.io/busybox:stable@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79" id=e2b435af-753e-4643-998e-dd3791ab5f2a name=/runtime.v1.ImageService/PullImage
	Nov 24 13:16:00 addons-715644 crio[779]: time="2025-11-24T13:16:00.814038245Z" level=info msg="Trying to access \"docker.io/library/busybox@sha256:3fbc632167424a6d997e74f52b878d7cc478225cffac6bc977eedfe51c7f4e79\""
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                        NAMESPACE
	cdd07a7ef83b2       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          8 seconds ago        Running             busybox                                  0                   0e1d54a5e14d2       busybox                                    default
	cac2d22f42517       registry.k8s.io/ingress-nginx/controller@sha256:7f2b00bd369a972bfb09acfe8c2525b99caeeeb54ab71d2822343e8fd4222e27                             12 seconds ago       Running             controller                               0                   af8e7dbb1cc4b       ingress-nginx-controller-6c8bf45fb-n6vc7   ingress-nginx
	431566734db48       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 16 seconds ago       Running             gcp-auth                                 0                   5cfd206ad6fa9       gcp-auth-78565c9fb4-jllj4                  gcp-auth
	32b77a8342024       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          18 seconds ago       Running             csi-snapshotter                          0                   da89e52a4842b       csi-hostpathplugin-vghhv                   kube-system
	9946073c4dbc0       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          19 seconds ago       Running             csi-provisioner                          0                   da89e52a4842b       csi-hostpathplugin-vghhv                   kube-system
	97f3de9ff4a38       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            20 seconds ago       Running             liveness-probe                           0                   da89e52a4842b       csi-hostpathplugin-vghhv                   kube-system
	3492cc9269215       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           21 seconds ago       Running             hostpath                                 0                   da89e52a4842b       csi-hostpathplugin-vghhv                   kube-system
	d9134036f413d       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:9a12b3c1d155bb081ff408a9b6c1cec18573c967e0c3917225b81ffe11c0b7f2                            22 seconds ago       Running             gadget                                   0                   46d8edf92cf66       gadget-j5p27                               gadget
	db3d376ec41b7       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                24 seconds ago       Running             node-driver-registrar                    0                   da89e52a4842b       csi-hostpathplugin-vghhv                   kube-system
	a9fad48eebd55       nvcr.io/nvidia/k8s-device-plugin@sha256:20db699f1480b6f37423cab909e9c6df5a4fdbd981b405e0d72f00a86fee5100                                     24 seconds ago       Running             nvidia-device-plugin-ctr                 0                   d119d41972e11       nvidia-device-plugin-daemonset-h8tqs       kube-system
	7f9d1b3fe4a90       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              27 seconds ago       Running             registry-proxy                           0                   2eddc137a3ac6       registry-proxy-kx44z                       kube-system
	de0cc746d3ed0       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     29 seconds ago       Running             amd-gpu-device-plugin                    0                   02c01f0506934       amd-gpu-device-plugin-hxftx                kube-system
	4c4b1aa9cfdbe       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:f016159150cb72d879e0d3b6852afbed68fe21d86be1e92c62ab7f56515287f5                   30 seconds ago       Exited              patch                                    0                   4292061af6982       gcp-auth-certs-patch-fwkkk                 gcp-auth
	fd3db7348c539       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:f016159150cb72d879e0d3b6852afbed68fe21d86be1e92c62ab7f56515287f5                   30 seconds ago       Exited              create                                   0                   7e7fd07fe3b84       gcp-auth-certs-create-czqhb                gcp-auth
	93fbc223db37d       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   30 seconds ago       Running             csi-external-health-monitor-controller   0                   da89e52a4842b       csi-hostpathplugin-vghhv                   kube-system
	a5ef3026f01f1       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             31 seconds ago       Running             local-path-provisioner                   0                   b12546325a8f8       local-path-provisioner-648f6765c9-7hnlx    local-path-storage
	744e5383d888f       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:f016159150cb72d879e0d3b6852afbed68fe21d86be1e92c62ab7f56515287f5                   32 seconds ago       Exited              patch                                    0                   8f8671d4b9e6c       ingress-nginx-admission-patch-ds6p4        ingress-nginx
	5949deef674ca       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:f016159150cb72d879e0d3b6852afbed68fe21d86be1e92c62ab7f56515287f5                   33 seconds ago       Exited              create                                   0                   5fca1df273341       ingress-nginx-admission-create-gq29m       ingress-nginx
	9be932139aaef       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             33 seconds ago       Running             csi-attacher                             0                   5d19ee3fadd15       csi-hostpath-attacher-0                    kube-system
	e91ca551d1e0e       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      35 seconds ago       Running             volume-snapshot-controller               0                   24ed0b6885160       snapshot-controller-7d9fbc56b8-7jfmk       kube-system
	edf878679786a       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      35 seconds ago       Running             volume-snapshot-controller               0                   7489f2f2032be       snapshot-controller-7d9fbc56b8-9lv6w       kube-system
	17cd086a0a854       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              36 seconds ago       Running             csi-resizer                              0                   2aee0aff8abfc       csi-hostpath-resizer-0                     kube-system
	33bdbf096e506       docker.io/library/registry@sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a                                           37 seconds ago       Running             registry                                 0                   aaff989c55fae       registry-6b586f9694-x6s72                  kube-system
	01ac83a9cfb43       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                                              38 seconds ago       Running             yakd                                     0                   91fbee87053a0       yakd-dashboard-5ff678cb9-jd5f7             yakd-dashboard
	bef94f1c94dd3       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               41 seconds ago       Running             minikube-ingress-dns                     0                   a28f02ad505a5       kube-ingress-dns-minikube                  kube-system
	83f5e4de5d194       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        46 seconds ago       Running             metrics-server                           0                   919e585bb6c2e       metrics-server-85b7d694d7-4fdfd            kube-system
	68304ee706d1f       gcr.io/cloud-spanner-emulator/emulator@sha256:22a4d5b0f97bd0c2ee20da342493c5a60e40b4d62ec20c174cb32ff4ee1f65bf                               47 seconds ago       Running             cloud-spanner-emulator                   0                   3cadee47b97c8       cloud-spanner-emulator-5bdddb765-dk2gw     default
	9fc9fbc51a1d5       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             49 seconds ago       Running             coredns                                  0                   99a478ee2f1bf       coredns-66bc5c9577-8kqrg                   kube-system
	80ca718552080       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             49 seconds ago       Running             storage-provisioner                      0                   e18d47fc14725       storage-provisioner                        kube-system
	3c0239d349ace       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                                             About a minute ago   Running             kindnet-cni                              0                   0181887543fda       kindnet-jb6km                              kube-system
	1cd2d69a4521d       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                                             About a minute ago   Running             kube-proxy                               0                   5732d1eea9eba       kube-proxy-c7prv                           kube-system
	f906d790e557c       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                                             About a minute ago   Running             etcd                                     0                   928929891d573       etcd-addons-715644                         kube-system
	8bd061f25cd27       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                                             About a minute ago   Running             kube-controller-manager                  0                   f597e84326fc8       kube-controller-manager-addons-715644      kube-system
	73d6f909ae2dc       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                                             About a minute ago   Running             kube-scheduler                           0                   545d7f48a30bf       kube-scheduler-addons-715644               kube-system
	e080d87ce42a1       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                                             About a minute ago   Running             kube-apiserver                           0                   1e948645cfd17       kube-apiserver-addons-715644               kube-system
	
	
	==> coredns [9fc9fbc51a1d5d85e698682518d6aabdc2c3030302e75bcb87adb6ae7d4fac0e] <==
	[INFO] 10.244.0.17:34321 - 31216 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000122295s
	[INFO] 10.244.0.17:37180 - 48049 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000115604s
	[INFO] 10.244.0.17:37180 - 48234 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000161279s
	[INFO] 10.244.0.17:45191 - 28576 "AAAA IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,aa,rd,ra 204 0.000050927s
	[INFO] 10.244.0.17:45191 - 28271 "A IN registry.kube-system.svc.cluster.local.us-central1-a.c.k8s-minikube.internal. udp 94 false 512" NXDOMAIN qr,aa,rd,ra 204 0.000096391s
	[INFO] 10.244.0.17:43022 - 5168 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000042928s
	[INFO] 10.244.0.17:43022 - 5468 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000071615s
	[INFO] 10.244.0.17:58926 - 14603 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000034184s
	[INFO] 10.244.0.17:58926 - 14444 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000061926s
	[INFO] 10.244.0.17:37617 - 9688 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000128023s
	[INFO] 10.244.0.17:37617 - 9914 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000171402s
	[INFO] 10.244.0.22:49313 - 44568 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000176544s
	[INFO] 10.244.0.22:47793 - 62951 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000247842s
	[INFO] 10.244.0.22:41952 - 20274 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.0001411s
	[INFO] 10.244.0.22:50006 - 24162 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.0001421s
	[INFO] 10.244.0.22:39280 - 23725 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000086554s
	[INFO] 10.244.0.22:34942 - 44377 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000132247s
	[INFO] 10.244.0.22:46556 - 54642 "AAAA IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 188 0.007460581s
	[INFO] 10.244.0.22:59028 - 31267 "A IN storage.googleapis.com.us-central1-a.c.k8s-minikube.internal. udp 89 false 1232" NXDOMAIN qr,rd,ra 188 0.007541779s
	[INFO] 10.244.0.22:52149 - 8979 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.004224337s
	[INFO] 10.244.0.22:33442 - 18202 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.005592987s
	[INFO] 10.244.0.22:40026 - 63095 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.005546615s
	[INFO] 10.244.0.22:47433 - 26869 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.006547095s
	[INFO] 10.244.0.22:35025 - 363 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.00128106s
	[INFO] 10.244.0.22:40164 - 61138 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001543068s
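
Editor's note: the NXDOMAIN entries above come from the pod resolver walking its search domains (svc.cluster.local, cluster.local, and the GCE-internal suffixes) before the bare name resolves with NOERROR. Assuming the busybox pod listed elsewhere in this report is still running, the search list driving that expansion could be checked with:

    kubectl --context addons-715644 exec busybox -- cat /etc/resolv.conf   # shows the search domains and ndots option behind the expanded queries above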
	
	
	==> describe nodes <==
	Name:               addons-715644
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-715644
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b5d1c9f4e75f4e638a533695fd62619949cefcab
	                    minikube.k8s.io/name=addons-715644
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T13_14_24_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-715644
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-715644"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 13:14:21 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-715644
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 13:15:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 13:15:55 +0000   Mon, 24 Nov 2025 13:14:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 13:15:55 +0000   Mon, 24 Nov 2025 13:14:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 13:15:55 +0000   Mon, 24 Nov 2025 13:14:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 13:15:55 +0000   Mon, 24 Nov 2025 13:15:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-715644
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                0d27353c-5710-4d14-a232-2bb0e65b7fcb
	  Boot ID:                    9a34d64a-eb17-4892-9c0b-855837aec864
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (28 in total)
	  Namespace                   Name                                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  default                     cloud-spanner-emulator-5bdddb765-dk2gw                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  gadget                      gadget-j5p27                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  gcp-auth                    gcp-auth-78565c9fb4-jllj4                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         83s
	  ingress-nginx               ingress-nginx-controller-6c8bf45fb-n6vc7                      100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         90s
	  kube-system                 amd-gpu-device-plugin-hxftx                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  kube-system                 coredns-66bc5c9577-8kqrg                                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     92s
	  kube-system                 csi-hostpath-attacher-0                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 csi-hostpath-resizer-0                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 csi-hostpathplugin-vghhv                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  kube-system                 etcd-addons-715644                                            100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         97s
	  kube-system                 kindnet-jb6km                                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      92s
	  kube-system                 kube-apiserver-addons-715644                                  250m (3%)     0 (0%)      0 (0%)           0 (0%)         99s
	  kube-system                 kube-controller-manager-addons-715644                         200m (2%)     0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 kube-ingress-dns-minikube                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 kube-proxy-c7prv                                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 kube-scheduler-addons-715644                                  100m (1%)     0 (0%)      0 (0%)           0 (0%)         98s
	  kube-system                 metrics-server-85b7d694d7-4fdfd                               100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         90s
	  kube-system                 nvidia-device-plugin-daemonset-h8tqs                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  kube-system                 registry-6b586f9694-x6s72                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 registry-creds-764b6fb674-4tmmd                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 registry-proxy-kx44z                                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  kube-system                 snapshot-controller-7d9fbc56b8-7jfmk                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 snapshot-controller-7d9fbc56b8-9lv6w                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 storage-provisioner                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  local-path-storage          helper-pod-create-pvc-b07547c9-8acb-4c52-b115-c56befc42fff    0 (0%)        0 (0%)      0 (0%)           0 (0%)         1s
	  local-path-storage          local-path-provisioner-648f6765c9-7hnlx                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-jd5f7                                0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     90s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 90s                  kube-proxy       
	  Normal  Starting                 102s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  102s (x2 over 102s)  kubelet          Node addons-715644 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    102s (x2 over 102s)  kubelet          Node addons-715644 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     102s                 kubelet          Node addons-715644 status is now: NodeHasSufficientPID
	  Normal  Starting                 98s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  97s                  kubelet          Node addons-715644 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    97s                  kubelet          Node addons-715644 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     97s                  kubelet          Node addons-715644 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           93s                  node-controller  Node addons-715644 event: Registered Node addons-715644 in Controller
	  Normal  NodeReady                50s                  kubelet          Node addons-715644 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000021] ll header: 00000000: 0a b9 9b 8d 41 09 ae 9f ab 92 86 38 08 00
	[Nov24 12:55] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff ae ac b8 98 4f 4e 08 06
	[  +0.001207] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 6a 02 51 c8 76 06 08 06
	[  +0.540469] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff d6 8f 3e db fb 0a 08 06
	[ +11.273868] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000010] ll header: 00000000: ff ff ff ff ff ff e2 37 d5 92 c7 55 08 06
	[  +0.005492] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000014] ll header: 00000000: ff ff ff ff ff ff 9e 3c 21 60 ec b4 08 06
	[  +6.086357] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 9a a1 53 18 7d 82 08 06
	[  +0.000331] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff d6 8f 3e db fb 0a 08 06
	[Nov24 12:56] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 7e 1c 4f d3 0f 6c 08 06
	[ +13.197718] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 8a cb d3 55 d4 d9 08 06
	[  +0.000310] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 9e 3c 21 60 ec b4 08 06
	[Nov24 12:57] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a c8 62 0b 56 43 08 06
	[  +0.000389] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 7e 1c 4f d3 0f 6c 08 06
	
	
	==> etcd [f906d790e557cecfdacb1936cb0ed8443cc0bc9466c826f9d800db6bf44bf47e] <==
	{"level":"warn","ts":"2025-11-24T13:14:21.052800Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40554","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:14:21.058446Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40568","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:14:21.064665Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40586","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:14:21.070741Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:14:21.077724Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40618","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:14:21.090582Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40662","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:14:21.096120Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40668","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:14:21.102335Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40686","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:14:21.108980Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40714","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:14:21.114687Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40720","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:14:21.123121Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40736","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:14:21.129660Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40770","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:14:21.135506Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40784","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:14:21.141297Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40796","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:14:21.147408Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:14:21.171163Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40834","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:14:21.177198Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40850","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:14:21.184552Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40860","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:14:21.234696Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40874","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:14:32.525206Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38860","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:14:32.532425Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38874","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:14:58.599116Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45510","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:14:58.605260Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:14:58.621090Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45540","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:14:58.627316Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:45560","server-name":"","error":"EOF"}
	
	
	==> gcp-auth [431566734db4859c1eef1f90f69e289c772af138aa943dda9b3895b932510bf1] <==
	2025/11/24 13:15:44 GCP Auth Webhook started!
	2025/11/24 13:15:51 Ready to marshal response ...
	2025/11/24 13:15:51 Ready to write response ...
	2025/11/24 13:15:51 Ready to marshal response ...
	2025/11/24 13:15:51 Ready to write response ...
	2025/11/24 13:15:51 Ready to marshal response ...
	2025/11/24 13:15:51 Ready to write response ...
	2025/11/24 13:16:00 Ready to marshal response ...
	2025/11/24 13:16:00 Ready to write response ...
	2025/11/24 13:16:00 Ready to marshal response ...
	2025/11/24 13:16:00 Ready to write response ...
	
	
	==> kernel <==
	 13:16:01 up  1:58,  0 user,  load average: 2.62, 1.20, 1.25
	Linux addons-715644 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [3c0239d349ace6e30dffd2560683ba8f02197dfb6eb490d1097a535ae3d5599f] <==
	I1124 13:14:30.581504       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T13:14:30Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 13:14:30.877364       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 13:14:30.877384       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 13:14:30.877397       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 13:14:30.878350       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1124 13:15:00.877493       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1124 13:15:00.878790       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1124 13:15:00.878794       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1124 13:15:00.879061       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	I1124 13:15:02.478026       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 13:15:02.478051       1 metrics.go:72] Registering metrics
	I1124 13:15:02.478108       1 controller.go:711] "Syncing nftables rules"
	I1124 13:15:10.877766       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 13:15:10.877820       1 main.go:301] handling current node
	I1124 13:15:20.877269       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 13:15:20.877309       1 main.go:301] handling current node
	I1124 13:15:30.877127       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 13:15:30.877156       1 main.go:301] handling current node
	I1124 13:15:40.877259       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 13:15:40.877292       1 main.go:301] handling current node
	I1124 13:15:50.877492       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 13:15:50.877523       1 main.go:301] handling current node
	I1124 13:16:00.877470       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 13:16:00.877498       1 main.go:301] handling current node
	
	
	==> kube-apiserver [e080d87ce42a145608e63f7c6b4c14b99b3014112ba7d536610206377da1bcb5] <==
	I1124 13:14:38.229377       1 alloc.go:328] "allocated clusterIPs" service="gcp-auth/gcp-auth" clusterIPs={"IPv4":"10.100.35.53"}
	W1124 13:14:58.599071       1 logging.go:55] [core] [Channel #270 SubChannel #271]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1124 13:14:58.605290       1 logging.go:55] [core] [Channel #274 SubChannel #275]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1124 13:14:58.621048       1 logging.go:55] [core] [Channel #278 SubChannel #279]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1124 13:14:58.627281       1 logging.go:55] [core] [Channel #282 SubChannel #283]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1124 13:15:11.441492       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.100.35.53:443: connect: connection refused
	E1124 13:15:11.441536       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.100.35.53:443: connect: connection refused" logger="UnhandledError"
	W1124 13:15:11.441531       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.100.35.53:443: connect: connection refused
	E1124 13:15:11.441657       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.100.35.53:443: connect: connection refused" logger="UnhandledError"
	W1124 13:15:11.459520       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.100.35.53:443: connect: connection refused
	E1124 13:15:11.459555       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.100.35.53:443: connect: connection refused" logger="UnhandledError"
	W1124 13:15:11.463095       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.100.35.53:443: connect: connection refused
	E1124 13:15:11.463129       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.100.35.53:443: connect: connection refused" logger="UnhandledError"
	W1124 13:15:17.070835       1 handler_proxy.go:99] no RequestInfo found in the context
	E1124 13:15:17.070924       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1124 13:15:17.071351       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.119.34:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.119.34:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.107.119.34:443: connect: connection refused" logger="UnhandledError"
	E1124 13:15:17.073017       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.119.34:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.119.34:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.107.119.34:443: connect: connection refused" logger="UnhandledError"
	E1124 13:15:17.078145       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.119.34:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.119.34:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.107.119.34:443: connect: connection refused" logger="UnhandledError"
	E1124 13:15:17.099195       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.107.119.34:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.107.119.34:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.107.119.34:443: connect: connection refused" logger="UnhandledError"
	I1124 13:15:17.173340       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1124 13:15:59.805759       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:40680: use of closed network connection
	E1124 13:15:59.949293       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:40694: use of closed network connection
	
	
	==> kube-controller-manager [8bd061f25cd271e0f1c7d640c968152672462e55b0bd0013dd192360bd8041bf] <==
	I1124 13:14:28.584217       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1124 13:14:28.584237       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1124 13:14:28.584300       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1124 13:14:28.584308       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1124 13:14:28.584433       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1124 13:14:28.584603       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1124 13:14:28.584628       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1124 13:14:28.584658       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1124 13:14:28.584681       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1124 13:14:28.585020       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1124 13:14:28.585079       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1124 13:14:28.585098       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1124 13:14:28.585480       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1124 13:14:28.585580       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1124 13:14:28.589903       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 13:14:28.593212       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1124 13:14:28.606444       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1124 13:14:58.593209       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1124 13:14:58.593344       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1124 13:14:58.593386       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1124 13:14:58.612630       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1124 13:14:58.615767       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1124 13:14:58.694124       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 13:14:58.716649       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 13:15:13.539448       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [1cd2d69a4521db2c270e5a2192b5d29f185e8986efeacc56186cd5c8a32fba30] <==
	I1124 13:14:30.358053       1 server_linux.go:53] "Using iptables proxy"
	I1124 13:14:30.518446       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1124 13:14:30.622233       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1124 13:14:30.624727       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1124 13:14:30.625005       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 13:14:30.885381       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 13:14:30.885467       1 server_linux.go:132] "Using iptables Proxier"
	I1124 13:14:30.912577       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 13:14:30.928978       1 server.go:527] "Version info" version="v1.34.1"
	I1124 13:14:30.942517       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 13:14:30.985243       1 config.go:200] "Starting service config controller"
	I1124 13:14:30.985331       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 13:14:30.985367       1 config.go:106] "Starting endpoint slice config controller"
	I1124 13:14:30.985373       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 13:14:30.985389       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 13:14:30.985394       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 13:14:30.986261       1 config.go:309] "Starting node config controller"
	I1124 13:14:30.986312       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 13:14:30.986343       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1124 13:14:31.087145       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1124 13:14:31.087945       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1124 13:14:31.090068       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [73d6f909ae2dca0d1fb7c89dd3fa82bdb9b4d2d1c56e66703aa1b07a967e3cc6] <==
	E1124 13:14:21.615330       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1124 13:14:21.615331       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1124 13:14:21.615328       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1124 13:14:21.615431       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1124 13:14:21.615438       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1124 13:14:21.616290       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1124 13:14:21.616325       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1124 13:14:21.616325       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1124 13:14:21.616368       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1124 13:14:21.616397       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1124 13:14:21.616492       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1124 13:14:21.616542       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1124 13:14:21.616527       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1124 13:14:21.616513       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1124 13:14:22.420604       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1124 13:14:22.476575       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1124 13:14:22.530322       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1124 13:14:22.558154       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1124 13:14:22.604306       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1124 13:14:22.722419       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1124 13:14:22.724281       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1124 13:14:22.750209       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1124 13:14:22.770176       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1124 13:14:22.835417       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	I1124 13:14:23.312262       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 24 13:15:33 addons-715644 kubelet[1307]: I1124 13:15:33.468787    1307 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-26hmf\" (UniqueName: \"kubernetes.io/projected/8dbe07ff-ff2c-48f7-9638-9e0354f8a798-kube-api-access-26hmf\") on node \"addons-715644\" DevicePath \"\""
	Nov 24 13:15:34 addons-715644 kubelet[1307]: I1124 13:15:34.126196    1307 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4292061af69820379fa0b8aa31d1928f056a388fa05390b678d709b05786ac9d"
	Nov 24 13:15:34 addons-715644 kubelet[1307]: I1124 13:15:34.128012    1307 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-kx44z" secret="" err="secret \"gcp-auth\" not found"
	Nov 24 13:15:34 addons-715644 kubelet[1307]: I1124 13:15:34.128096    1307 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-hxftx" secret="" err="secret \"gcp-auth\" not found"
	Nov 24 13:15:34 addons-715644 kubelet[1307]: I1124 13:15:34.137608    1307 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/registry-proxy-kx44z" podStartSLOduration=1.304595266 podStartE2EDuration="23.137592958s" podCreationTimestamp="2025-11-24 13:15:11 +0000 UTC" firstStartedPulling="2025-11-24 13:15:11.882242647 +0000 UTC m=+48.059301763" lastFinishedPulling="2025-11-24 13:15:33.715240339 +0000 UTC m=+69.892299455" observedRunningTime="2025-11-24 13:15:34.137205532 +0000 UTC m=+70.314264668" watchObservedRunningTime="2025-11-24 13:15:34.137592958 +0000 UTC m=+70.314652093"
	Nov 24 13:15:35 addons-715644 kubelet[1307]: I1124 13:15:35.131698    1307 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-kx44z" secret="" err="secret \"gcp-auth\" not found"
	Nov 24 13:15:37 addons-715644 kubelet[1307]: I1124 13:15:37.141593    1307 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-h8tqs" secret="" err="secret \"gcp-auth\" not found"
	Nov 24 13:15:37 addons-715644 kubelet[1307]: I1124 13:15:37.152618    1307 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/nvidia-device-plugin-daemonset-h8tqs" podStartSLOduration=1.446496168 podStartE2EDuration="26.152597534s" podCreationTimestamp="2025-11-24 13:15:11 +0000 UTC" firstStartedPulling="2025-11-24 13:15:11.88406174 +0000 UTC m=+48.061120869" lastFinishedPulling="2025-11-24 13:15:36.5901631 +0000 UTC m=+72.767222235" observedRunningTime="2025-11-24 13:15:37.152550795 +0000 UTC m=+73.329609931" watchObservedRunningTime="2025-11-24 13:15:37.152597534 +0000 UTC m=+73.329656672"
	Nov 24 13:15:38 addons-715644 kubelet[1307]: I1124 13:15:38.148664    1307 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-h8tqs" secret="" err="secret \"gcp-auth\" not found"
	Nov 24 13:15:40 addons-715644 kubelet[1307]: I1124 13:15:40.792233    1307 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gadget/gadget-j5p27" podStartSLOduration=66.696509134 podStartE2EDuration="1m9.792211224s" podCreationTimestamp="2025-11-24 13:14:31 +0000 UTC" firstStartedPulling="2025-11-24 13:15:36.279869683 +0000 UTC m=+72.456928801" lastFinishedPulling="2025-11-24 13:15:39.375571772 +0000 UTC m=+75.552630891" observedRunningTime="2025-11-24 13:15:40.171786149 +0000 UTC m=+76.348845287" watchObservedRunningTime="2025-11-24 13:15:40.792211224 +0000 UTC m=+76.969270369"
	Nov 24 13:15:40 addons-715644 kubelet[1307]: I1124 13:15:40.952222    1307 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: hostpath.csi.k8s.io endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
	Nov 24 13:15:40 addons-715644 kubelet[1307]: I1124 13:15:40.952262    1307 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: hostpath.csi.k8s.io at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
	Nov 24 13:15:43 addons-715644 kubelet[1307]: I1124 13:15:43.194537    1307 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/csi-hostpathplugin-vghhv" podStartSLOduration=1.231511652 podStartE2EDuration="32.194517201s" podCreationTimestamp="2025-11-24 13:15:11 +0000 UTC" firstStartedPulling="2025-11-24 13:15:11.867181429 +0000 UTC m=+48.044240548" lastFinishedPulling="2025-11-24 13:15:42.83018698 +0000 UTC m=+79.007246097" observedRunningTime="2025-11-24 13:15:43.192252325 +0000 UTC m=+79.369311482" watchObservedRunningTime="2025-11-24 13:15:43.194517201 +0000 UTC m=+79.371576336"
	Nov 24 13:15:43 addons-715644 kubelet[1307]: E1124 13:15:43.347015    1307 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Nov 24 13:15:43 addons-715644 kubelet[1307]: E1124 13:15:43.347105    1307 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c1b81cb2-69d5-4d69-a8f2-14e4d4a88632-gcr-creds podName:c1b81cb2-69d5-4d69-a8f2-14e4d4a88632 nodeName:}" failed. No retries permitted until 2025-11-24 13:16:15.347082271 +0000 UTC m=+111.524141404 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/c1b81cb2-69d5-4d69-a8f2-14e4d4a88632-gcr-creds") pod "registry-creds-764b6fb674-4tmmd" (UID: "c1b81cb2-69d5-4d69-a8f2-14e4d4a88632") : secret "registry-creds-gcr" not found
	Nov 24 13:15:45 addons-715644 kubelet[1307]: I1124 13:15:45.213113    1307 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gcp-auth/gcp-auth-78565c9fb4-jllj4" podStartSLOduration=66.033078441 podStartE2EDuration="1m7.213090506s" podCreationTimestamp="2025-11-24 13:14:38 +0000 UTC" firstStartedPulling="2025-11-24 13:15:43.613584299 +0000 UTC m=+79.790643418" lastFinishedPulling="2025-11-24 13:15:44.793596358 +0000 UTC m=+80.970655483" observedRunningTime="2025-11-24 13:15:45.211456111 +0000 UTC m=+81.388515267" watchObservedRunningTime="2025-11-24 13:15:45.213090506 +0000 UTC m=+81.390149644"
	Nov 24 13:15:49 addons-715644 kubelet[1307]: I1124 13:15:49.224230    1307 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="ingress-nginx/ingress-nginx-controller-6c8bf45fb-n6vc7" podStartSLOduration=73.261693791 podStartE2EDuration="1m18.224213248s" podCreationTimestamp="2025-11-24 13:14:31 +0000 UTC" firstStartedPulling="2025-11-24 13:15:43.62677866 +0000 UTC m=+79.803837778" lastFinishedPulling="2025-11-24 13:15:48.589298105 +0000 UTC m=+84.766357235" observedRunningTime="2025-11-24 13:15:49.222911154 +0000 UTC m=+85.399970291" watchObservedRunningTime="2025-11-24 13:15:49.224213248 +0000 UTC m=+85.401272386"
	Nov 24 13:15:51 addons-715644 kubelet[1307]: I1124 13:15:51.814705    1307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qp5qp\" (UniqueName: \"kubernetes.io/projected/9962db4c-07c4-44a4-9f16-ae616a655918-kube-api-access-qp5qp\") pod \"busybox\" (UID: \"9962db4c-07c4-44a4-9f16-ae616a655918\") " pod="default/busybox"
	Nov 24 13:15:51 addons-715644 kubelet[1307]: I1124 13:15:51.814758    1307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/9962db4c-07c4-44a4-9f16-ae616a655918-gcp-creds\") pod \"busybox\" (UID: \"9962db4c-07c4-44a4-9f16-ae616a655918\") " pod="default/busybox"
	Nov 24 13:15:53 addons-715644 kubelet[1307]: I1124 13:15:53.317794    1307 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.687771183 podStartE2EDuration="2.317773797s" podCreationTimestamp="2025-11-24 13:15:51 +0000 UTC" firstStartedPulling="2025-11-24 13:15:51.995508276 +0000 UTC m=+88.172567390" lastFinishedPulling="2025-11-24 13:15:52.625510889 +0000 UTC m=+88.802570004" observedRunningTime="2025-11-24 13:15:53.316720736 +0000 UTC m=+89.493779872" watchObservedRunningTime="2025-11-24 13:15:53.317773797 +0000 UTC m=+89.494832932"
	Nov 24 13:15:59 addons-715644 kubelet[1307]: E1124 13:15:59.949207    1307 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:59198->127.0.0.1:44107: write tcp 127.0.0.1:59198->127.0.0.1:44107: write: broken pipe
	Nov 24 13:16:00 addons-715644 kubelet[1307]: I1124 13:16:00.576179    1307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/2549f93d-2fe1-4944-b569-6a369c700eff-gcp-creds\") pod \"helper-pod-create-pvc-b07547c9-8acb-4c52-b115-c56befc42fff\" (UID: \"2549f93d-2fe1-4944-b569-6a369c700eff\") " pod="local-path-storage/helper-pod-create-pvc-b07547c9-8acb-4c52-b115-c56befc42fff"
	Nov 24 13:16:00 addons-715644 kubelet[1307]: I1124 13:16:00.576229    1307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/host-path/2549f93d-2fe1-4944-b569-6a369c700eff-data\") pod \"helper-pod-create-pvc-b07547c9-8acb-4c52-b115-c56befc42fff\" (UID: \"2549f93d-2fe1-4944-b569-6a369c700eff\") " pod="local-path-storage/helper-pod-create-pvc-b07547c9-8acb-4c52-b115-c56befc42fff"
	Nov 24 13:16:00 addons-715644 kubelet[1307]: I1124 13:16:00.576249    1307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"script\" (UniqueName: \"kubernetes.io/configmap/2549f93d-2fe1-4944-b569-6a369c700eff-script\") pod \"helper-pod-create-pvc-b07547c9-8acb-4c52-b115-c56befc42fff\" (UID: \"2549f93d-2fe1-4944-b569-6a369c700eff\") " pod="local-path-storage/helper-pod-create-pvc-b07547c9-8acb-4c52-b115-c56befc42fff"
	Nov 24 13:16:00 addons-715644 kubelet[1307]: I1124 13:16:00.576291    1307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ln5v2\" (UniqueName: \"kubernetes.io/projected/2549f93d-2fe1-4944-b569-6a369c700eff-kube-api-access-ln5v2\") pod \"helper-pod-create-pvc-b07547c9-8acb-4c52-b115-c56befc42fff\" (UID: \"2549f93d-2fe1-4944-b569-6a369c700eff\") " pod="local-path-storage/helper-pod-create-pvc-b07547c9-8acb-4c52-b115-c56befc42fff"
	
	
	==> storage-provisioner [80ca7185520801a449353432d8a29471e92f942c8e6b30f587a794abac0fb7dd] <==
	W1124 13:15:36.083804       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:15:38.086950       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:15:38.091128       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:15:40.093912       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:15:40.097185       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:15:42.099468       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:15:42.102592       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:15:44.105871       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:15:44.110996       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:15:46.114882       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:15:46.165928       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:15:48.169502       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:15:48.173508       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:15:50.175997       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:15:50.180396       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:15:52.183178       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:15:52.186843       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:15:54.189159       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:15:54.192630       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:15:56.194814       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:15:56.198091       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:15:58.201078       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:15:58.206128       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:16:00.208555       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:16:00.212136       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-715644 -n addons-715644
helpers_test.go:269: (dbg) Run:  kubectl --context addons-715644 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: test-local-path gcp-auth-certs-create-czqhb gcp-auth-certs-patch-fwkkk ingress-nginx-admission-create-gq29m ingress-nginx-admission-patch-ds6p4 registry-creds-764b6fb674-4tmmd helper-pod-create-pvc-b07547c9-8acb-4c52-b115-c56befc42fff
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-715644 describe pod test-local-path gcp-auth-certs-create-czqhb gcp-auth-certs-patch-fwkkk ingress-nginx-admission-create-gq29m ingress-nginx-admission-patch-ds6p4 registry-creds-764b6fb674-4tmmd helper-pod-create-pvc-b07547c9-8acb-4c52-b115-c56befc42fff
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-715644 describe pod test-local-path gcp-auth-certs-create-czqhb gcp-auth-certs-patch-fwkkk ingress-nginx-admission-create-gq29m ingress-nginx-admission-patch-ds6p4 registry-creds-764b6fb674-4tmmd helper-pod-create-pvc-b07547c9-8acb-4c52-b115-c56befc42fff: exit status 1 (77.741836ms)

                                                
                                                
-- stdout --
	Name:             test-local-path
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           run=test-local-path
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Containers:
	  busybox:
	    Image:      busybox:stable
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sh
	      -c
	      echo 'local-path-provisioner' > /test/file1
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /test from data (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-f2lfv (ro)
	Volumes:
	  data:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  test-pvc
	    ReadOnly:   false
	  kube-api-access-f2lfv:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:            <none>

                                                
                                                
-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "gcp-auth-certs-create-czqhb" not found
	Error from server (NotFound): pods "gcp-auth-certs-patch-fwkkk" not found
	Error from server (NotFound): pods "ingress-nginx-admission-create-gq29m" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-ds6p4" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-4tmmd" not found
	Error from server (NotFound): pods "helper-pod-create-pvc-b07547c9-8acb-4c52-b115-c56befc42fff" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context addons-715644 describe pod test-local-path gcp-auth-certs-create-czqhb gcp-auth-certs-patch-fwkkk ingress-nginx-admission-create-gq29m ingress-nginx-admission-patch-ds6p4 registry-creds-764b6fb674-4tmmd helper-pod-create-pvc-b07547c9-8acb-4c52-b115-c56befc42fff: exit status 1
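The describe step exits 1 here because most of the non-running pods it was handed (the gcp-auth cert jobs, the ingress-nginx admission jobs, registry-creds and the local-path helper pod) had already been deleted by the time the post-mortem ran; only test-local-path still existed and is described above. A minimal sketch of an equivalent query that tolerates already-removed pods, assuming one wanted to rerun the post-mortem by hand (kubectl get with --ignore-not-found; this is not part of the test suite):

	kubectl --context addons-715644 get pod test-local-path registry-creds-764b6fb674-4tmmd \
	  --ignore-not-found -o wide   # NotFound pods are skipped instead of failing the command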
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-715644 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-715644 addons disable headlamp --alsologtostderr -v=1: exit status 11 (244.257335ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1124 13:16:02.585930  362057 out.go:360] Setting OutFile to fd 1 ...
	I1124 13:16:02.586201  362057 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:16:02.586211  362057 out.go:374] Setting ErrFile to fd 2...
	I1124 13:16:02.586215  362057 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:16:02.586405  362057 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-348000/.minikube/bin
	I1124 13:16:02.586683  362057 mustload.go:66] Loading cluster: addons-715644
	I1124 13:16:02.587039  362057 config.go:182] Loaded profile config "addons-715644": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 13:16:02.587057  362057 addons.go:622] checking whether the cluster is paused
	I1124 13:16:02.587141  362057 config.go:182] Loaded profile config "addons-715644": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 13:16:02.587153  362057 host.go:66] Checking if "addons-715644" exists ...
	I1124 13:16:02.587522  362057 cli_runner.go:164] Run: docker container inspect addons-715644 --format={{.State.Status}}
	I1124 13:16:02.605093  362057 ssh_runner.go:195] Run: systemctl --version
	I1124 13:16:02.605136  362057 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-715644
	I1124 13:16:02.622103  362057 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/addons-715644/id_rsa Username:docker}
	I1124 13:16:02.721021  362057 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 13:16:02.721109  362057 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 13:16:02.749494  362057 cri.go:89] found id: "32b77a8342024730ccea78bac96aafa904a8452871f7ad6ede2d73201b5297ae"
	I1124 13:16:02.749513  362057 cri.go:89] found id: "9946073c4dbc00e94343f77ffd10424e77a179847087d6518167e5967c01b6ac"
	I1124 13:16:02.749518  362057 cri.go:89] found id: "97f3de9ff4a387cd014af0563e9b3b98a067b536fde246c22a5e118e0732e718"
	I1124 13:16:02.749522  362057 cri.go:89] found id: "3492cc92692158b5e6044d1ea2c4d57bac8bda63d2ed59eab5592914657894e2"
	I1124 13:16:02.749527  362057 cri.go:89] found id: "db3d376ec41b7312ca6691a8743e9d9a78aa3d7d989fa62b5d4bed28d1352645"
	I1124 13:16:02.749546  362057 cri.go:89] found id: "a9fad48eebd55f9988e66d76b4c1ba8045a48e49c7d2ec247434bda584f848bd"
	I1124 13:16:02.749550  362057 cri.go:89] found id: "7f9d1b3fe4a9063805681f97d4f25ffb0176dbe2fc494b57f8b0ba808d906ab6"
	I1124 13:16:02.749555  362057 cri.go:89] found id: "de0cc746d3ed0efaaab995d7c9060f139d2048fdf2dc530c24968b27a493b199"
	I1124 13:16:02.749560  362057 cri.go:89] found id: "93fbc223db37d476f76feddf4b1b953455b15bfd5655c2e0f21618dcd9149be0"
	I1124 13:16:02.749568  362057 cri.go:89] found id: "9be932139aaef6d1a0813197ebd25803631d854bd08cc570ceb08ebb61e42533"
	I1124 13:16:02.749576  362057 cri.go:89] found id: "e91ca551d1e0e1b67d5a38e6388b9a94476991f0536bc399a4fc40157634ce1f"
	I1124 13:16:02.749581  362057 cri.go:89] found id: "edf878679786a9abe21c9897fa78bbc59bd532ce6f4ce69457f2e17deb93802a"
	I1124 13:16:02.749590  362057 cri.go:89] found id: "17cd086a0a85475fa6e37dbc6d551664d7ac78bb7fdc3540fb1bd1e175d77793"
	I1124 13:16:02.749595  362057 cri.go:89] found id: "33bdbf096e506d847514d785957b6ff08d7be79c8c2ce3cad269fc769d56f682"
	I1124 13:16:02.749602  362057 cri.go:89] found id: "bef94f1c94dd311ef47360262b10fc75702b47761e4bf690355c88cd5acbf47d"
	I1124 13:16:02.749610  362057 cri.go:89] found id: "83f5e4de5d19483eba28cce6cc0496cbd37a7f45e5dd8fdd549b5d2a0fe93004"
	I1124 13:16:02.749618  362057 cri.go:89] found id: "9fc9fbc51a1d5d85e698682518d6aabdc2c3030302e75bcb87adb6ae7d4fac0e"
	I1124 13:16:02.749623  362057 cri.go:89] found id: "80ca7185520801a449353432d8a29471e92f942c8e6b30f587a794abac0fb7dd"
	I1124 13:16:02.749627  362057 cri.go:89] found id: "3c0239d349ace6e30dffd2560683ba8f02197dfb6eb490d1097a535ae3d5599f"
	I1124 13:16:02.749631  362057 cri.go:89] found id: "1cd2d69a4521db2c270e5a2192b5d29f185e8986efeacc56186cd5c8a32fba30"
	I1124 13:16:02.749636  362057 cri.go:89] found id: "f906d790e557cecfdacb1936cb0ed8443cc0bc9466c826f9d800db6bf44bf47e"
	I1124 13:16:02.749640  362057 cri.go:89] found id: "8bd061f25cd271e0f1c7d640c968152672462e55b0bd0013dd192360bd8041bf"
	I1124 13:16:02.749643  362057 cri.go:89] found id: "73d6f909ae2dca0d1fb7c89dd3fa82bdb9b4d2d1c56e66703aa1b07a967e3cc6"
	I1124 13:16:02.749648  362057 cri.go:89] found id: "e080d87ce42a145608e63f7c6b4c14b99b3014112ba7d536610206377da1bcb5"
	I1124 13:16:02.749653  362057 cri.go:89] found id: ""
	I1124 13:16:02.749694  362057 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 13:16:02.762665  362057 out.go:203] 
	W1124 13:16:02.763665  362057 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T13:16:02Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T13:16:02Z" level=error msg="open /run/runc: no such file or directory"
	
	W1124 13:16:02.763680  362057 out.go:285] * 
	* 
	W1124 13:16:02.767579  362057 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1124 13:16:02.768850  362057 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable headlamp addon: args "out/minikube-linux-amd64 -p addons-715644 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (2.58s)
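The exit status 11 above comes from minikube's paused-state check rather than from headlamp itself: the trace shows crictl listing the kube-system containers successfully, but the follow-up sudo runc list -f json fails with "open /run/runc: no such file or directory", which surfaces as MK_ADDON_DISABLE_PAUSED. The same signature recurs in the other addon-disable failures below. A minimal manual reproduction against the same profile, assuming the cluster is still up; the first two commands are copied from the failing check, the last is only a hypothetical look at whether the runc state directory exists on this crio node:

	minikube -p addons-715644 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	minikube -p addons-715644 ssh -- sudo runc list -f json     # expected to reproduce the /run/runc error
	minikube -p addons-715644 ssh -- ls -ld /run/runc           # hypothetical: check whether the runc state directory exists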

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (5.25s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-5bdddb765-dk2gw" [b66a1aa8-4178-4532-bdf2-5782018b45e0] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.002761967s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-715644 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-715644 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (239.091437ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1124 13:16:20.056472  364337 out.go:360] Setting OutFile to fd 1 ...
	I1124 13:16:20.056727  364337 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:16:20.056736  364337 out.go:374] Setting ErrFile to fd 2...
	I1124 13:16:20.056740  364337 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:16:20.056909  364337 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-348000/.minikube/bin
	I1124 13:16:20.057137  364337 mustload.go:66] Loading cluster: addons-715644
	I1124 13:16:20.057456  364337 config.go:182] Loaded profile config "addons-715644": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 13:16:20.057472  364337 addons.go:622] checking whether the cluster is paused
	I1124 13:16:20.057552  364337 config.go:182] Loaded profile config "addons-715644": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 13:16:20.057564  364337 host.go:66] Checking if "addons-715644" exists ...
	I1124 13:16:20.057908  364337 cli_runner.go:164] Run: docker container inspect addons-715644 --format={{.State.Status}}
	I1124 13:16:20.074371  364337 ssh_runner.go:195] Run: systemctl --version
	I1124 13:16:20.074438  364337 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-715644
	I1124 13:16:20.090257  364337 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/addons-715644/id_rsa Username:docker}
	I1124 13:16:20.188901  364337 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 13:16:20.189002  364337 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 13:16:20.216850  364337 cri.go:89] found id: "e235aa339c9ac5d961e51c8ba6ba912cc243b46b0ef6c59202698fe46121aefb"
	I1124 13:16:20.216871  364337 cri.go:89] found id: "32b77a8342024730ccea78bac96aafa904a8452871f7ad6ede2d73201b5297ae"
	I1124 13:16:20.216875  364337 cri.go:89] found id: "9946073c4dbc00e94343f77ffd10424e77a179847087d6518167e5967c01b6ac"
	I1124 13:16:20.216879  364337 cri.go:89] found id: "97f3de9ff4a387cd014af0563e9b3b98a067b536fde246c22a5e118e0732e718"
	I1124 13:16:20.216882  364337 cri.go:89] found id: "3492cc92692158b5e6044d1ea2c4d57bac8bda63d2ed59eab5592914657894e2"
	I1124 13:16:20.216885  364337 cri.go:89] found id: "db3d376ec41b7312ca6691a8743e9d9a78aa3d7d989fa62b5d4bed28d1352645"
	I1124 13:16:20.216907  364337 cri.go:89] found id: "a9fad48eebd55f9988e66d76b4c1ba8045a48e49c7d2ec247434bda584f848bd"
	I1124 13:16:20.216912  364337 cri.go:89] found id: "7f9d1b3fe4a9063805681f97d4f25ffb0176dbe2fc494b57f8b0ba808d906ab6"
	I1124 13:16:20.216917  364337 cri.go:89] found id: "de0cc746d3ed0efaaab995d7c9060f139d2048fdf2dc530c24968b27a493b199"
	I1124 13:16:20.216945  364337 cri.go:89] found id: "93fbc223db37d476f76feddf4b1b953455b15bfd5655c2e0f21618dcd9149be0"
	I1124 13:16:20.216953  364337 cri.go:89] found id: "9be932139aaef6d1a0813197ebd25803631d854bd08cc570ceb08ebb61e42533"
	I1124 13:16:20.216957  364337 cri.go:89] found id: "e91ca551d1e0e1b67d5a38e6388b9a94476991f0536bc399a4fc40157634ce1f"
	I1124 13:16:20.216959  364337 cri.go:89] found id: "edf878679786a9abe21c9897fa78bbc59bd532ce6f4ce69457f2e17deb93802a"
	I1124 13:16:20.216962  364337 cri.go:89] found id: "17cd086a0a85475fa6e37dbc6d551664d7ac78bb7fdc3540fb1bd1e175d77793"
	I1124 13:16:20.216965  364337 cri.go:89] found id: "33bdbf096e506d847514d785957b6ff08d7be79c8c2ce3cad269fc769d56f682"
	I1124 13:16:20.216969  364337 cri.go:89] found id: "bef94f1c94dd311ef47360262b10fc75702b47761e4bf690355c88cd5acbf47d"
	I1124 13:16:20.216975  364337 cri.go:89] found id: "83f5e4de5d19483eba28cce6cc0496cbd37a7f45e5dd8fdd549b5d2a0fe93004"
	I1124 13:16:20.216979  364337 cri.go:89] found id: "9fc9fbc51a1d5d85e698682518d6aabdc2c3030302e75bcb87adb6ae7d4fac0e"
	I1124 13:16:20.216982  364337 cri.go:89] found id: "80ca7185520801a449353432d8a29471e92f942c8e6b30f587a794abac0fb7dd"
	I1124 13:16:20.216985  364337 cri.go:89] found id: "3c0239d349ace6e30dffd2560683ba8f02197dfb6eb490d1097a535ae3d5599f"
	I1124 13:16:20.216995  364337 cri.go:89] found id: "1cd2d69a4521db2c270e5a2192b5d29f185e8986efeacc56186cd5c8a32fba30"
	I1124 13:16:20.217002  364337 cri.go:89] found id: "f906d790e557cecfdacb1936cb0ed8443cc0bc9466c826f9d800db6bf44bf47e"
	I1124 13:16:20.217007  364337 cri.go:89] found id: "8bd061f25cd271e0f1c7d640c968152672462e55b0bd0013dd192360bd8041bf"
	I1124 13:16:20.217012  364337 cri.go:89] found id: "73d6f909ae2dca0d1fb7c89dd3fa82bdb9b4d2d1c56e66703aa1b07a967e3cc6"
	I1124 13:16:20.217020  364337 cri.go:89] found id: "e080d87ce42a145608e63f7c6b4c14b99b3014112ba7d536610206377da1bcb5"
	I1124 13:16:20.217024  364337 cri.go:89] found id: ""
	I1124 13:16:20.217063  364337 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 13:16:20.231025  364337 out.go:203] 
	W1124 13:16:20.232019  364337 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T13:16:20Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T13:16:20Z" level=error msg="open /run/runc: no such file or directory"
	
	W1124 13:16:20.232038  364337 out.go:285] * 
	* 
	W1124 13:16:20.236049  364337 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1124 13:16:20.237223  364337 out.go:203] 

                                                
                                                
** /stderr **
addons_test.go:1055: failed to disable cloud-spanner addon: args "out/minikube-linux-amd64 -p addons-715644 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (5.25s)

TestAddons/parallel/LocalPath (9.12s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-715644 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-715644 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-715644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-715644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-715644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-715644 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-715644 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [a49d5fba-d607-4c2b-9274-cda91681d5ac] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [a49d5fba-d607-4c2b-9274-cda91681d5ac] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [a49d5fba-d607-4c2b-9274-cda91681d5ac] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.002892299s
addons_test.go:967: (dbg) Run:  kubectl --context addons-715644 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-715644 ssh "cat /opt/local-path-provisioner/pvc-b07547c9-8acb-4c52-b115-c56befc42fff_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-715644 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-715644 delete pvc test-pvc
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-715644 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-715644 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (250.451719ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1124 13:16:09.117011  362717 out.go:360] Setting OutFile to fd 1 ...
	I1124 13:16:09.117225  362717 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:16:09.117233  362717 out.go:374] Setting ErrFile to fd 2...
	I1124 13:16:09.117237  362717 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:16:09.117446  362717 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-348000/.minikube/bin
	I1124 13:16:09.117705  362717 mustload.go:66] Loading cluster: addons-715644
	I1124 13:16:09.118003  362717 config.go:182] Loaded profile config "addons-715644": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 13:16:09.118024  362717 addons.go:622] checking whether the cluster is paused
	I1124 13:16:09.118112  362717 config.go:182] Loaded profile config "addons-715644": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 13:16:09.118126  362717 host.go:66] Checking if "addons-715644" exists ...
	I1124 13:16:09.118495  362717 cli_runner.go:164] Run: docker container inspect addons-715644 --format={{.State.Status}}
	I1124 13:16:09.134980  362717 ssh_runner.go:195] Run: systemctl --version
	I1124 13:16:09.135038  362717 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-715644
	I1124 13:16:09.151553  362717 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/addons-715644/id_rsa Username:docker}
	I1124 13:16:09.250064  362717 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 13:16:09.250155  362717 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 13:16:09.277238  362717 cri.go:89] found id: "32b77a8342024730ccea78bac96aafa904a8452871f7ad6ede2d73201b5297ae"
	I1124 13:16:09.277261  362717 cri.go:89] found id: "9946073c4dbc00e94343f77ffd10424e77a179847087d6518167e5967c01b6ac"
	I1124 13:16:09.277265  362717 cri.go:89] found id: "97f3de9ff4a387cd014af0563e9b3b98a067b536fde246c22a5e118e0732e718"
	I1124 13:16:09.277268  362717 cri.go:89] found id: "3492cc92692158b5e6044d1ea2c4d57bac8bda63d2ed59eab5592914657894e2"
	I1124 13:16:09.277271  362717 cri.go:89] found id: "db3d376ec41b7312ca6691a8743e9d9a78aa3d7d989fa62b5d4bed28d1352645"
	I1124 13:16:09.277275  362717 cri.go:89] found id: "a9fad48eebd55f9988e66d76b4c1ba8045a48e49c7d2ec247434bda584f848bd"
	I1124 13:16:09.277278  362717 cri.go:89] found id: "7f9d1b3fe4a9063805681f97d4f25ffb0176dbe2fc494b57f8b0ba808d906ab6"
	I1124 13:16:09.277281  362717 cri.go:89] found id: "de0cc746d3ed0efaaab995d7c9060f139d2048fdf2dc530c24968b27a493b199"
	I1124 13:16:09.277284  362717 cri.go:89] found id: "93fbc223db37d476f76feddf4b1b953455b15bfd5655c2e0f21618dcd9149be0"
	I1124 13:16:09.277293  362717 cri.go:89] found id: "9be932139aaef6d1a0813197ebd25803631d854bd08cc570ceb08ebb61e42533"
	I1124 13:16:09.277299  362717 cri.go:89] found id: "e91ca551d1e0e1b67d5a38e6388b9a94476991f0536bc399a4fc40157634ce1f"
	I1124 13:16:09.277302  362717 cri.go:89] found id: "edf878679786a9abe21c9897fa78bbc59bd532ce6f4ce69457f2e17deb93802a"
	I1124 13:16:09.277305  362717 cri.go:89] found id: "17cd086a0a85475fa6e37dbc6d551664d7ac78bb7fdc3540fb1bd1e175d77793"
	I1124 13:16:09.277308  362717 cri.go:89] found id: "33bdbf096e506d847514d785957b6ff08d7be79c8c2ce3cad269fc769d56f682"
	I1124 13:16:09.277311  362717 cri.go:89] found id: "bef94f1c94dd311ef47360262b10fc75702b47761e4bf690355c88cd5acbf47d"
	I1124 13:16:09.277322  362717 cri.go:89] found id: "83f5e4de5d19483eba28cce6cc0496cbd37a7f45e5dd8fdd549b5d2a0fe93004"
	I1124 13:16:09.277330  362717 cri.go:89] found id: "9fc9fbc51a1d5d85e698682518d6aabdc2c3030302e75bcb87adb6ae7d4fac0e"
	I1124 13:16:09.277334  362717 cri.go:89] found id: "80ca7185520801a449353432d8a29471e92f942c8e6b30f587a794abac0fb7dd"
	I1124 13:16:09.277337  362717 cri.go:89] found id: "3c0239d349ace6e30dffd2560683ba8f02197dfb6eb490d1097a535ae3d5599f"
	I1124 13:16:09.277339  362717 cri.go:89] found id: "1cd2d69a4521db2c270e5a2192b5d29f185e8986efeacc56186cd5c8a32fba30"
	I1124 13:16:09.277342  362717 cri.go:89] found id: "f906d790e557cecfdacb1936cb0ed8443cc0bc9466c826f9d800db6bf44bf47e"
	I1124 13:16:09.277345  362717 cri.go:89] found id: "8bd061f25cd271e0f1c7d640c968152672462e55b0bd0013dd192360bd8041bf"
	I1124 13:16:09.277348  362717 cri.go:89] found id: "73d6f909ae2dca0d1fb7c89dd3fa82bdb9b4d2d1c56e66703aa1b07a967e3cc6"
	I1124 13:16:09.277351  362717 cri.go:89] found id: "e080d87ce42a145608e63f7c6b4c14b99b3014112ba7d536610206377da1bcb5"
	I1124 13:16:09.277353  362717 cri.go:89] found id: ""
	I1124 13:16:09.277403  362717 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 13:16:09.292208  362717 out.go:203] 
	W1124 13:16:09.293604  362717 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T13:16:09Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T13:16:09Z" level=error msg="open /run/runc: no such file or directory"
	
	W1124 13:16:09.293625  362717 out.go:285] * 
	* 
	W1124 13:16:09.300553  362717 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1124 13:16:09.301908  362717 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-amd64 -p addons-715644 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (9.12s)

TestAddons/parallel/NvidiaDevicePlugin (6.26s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-h8tqs" [b4524963-77cc-46be-83b4-8a0f045e9846] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003214012s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-715644 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-715644 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (253.229785ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1124 13:16:06.259578  362324 out.go:360] Setting OutFile to fd 1 ...
	I1124 13:16:06.259817  362324 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:16:06.259827  362324 out.go:374] Setting ErrFile to fd 2...
	I1124 13:16:06.259831  362324 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:16:06.260068  362324 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-348000/.minikube/bin
	I1124 13:16:06.260327  362324 mustload.go:66] Loading cluster: addons-715644
	I1124 13:16:06.260625  362324 config.go:182] Loaded profile config "addons-715644": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 13:16:06.260640  362324 addons.go:622] checking whether the cluster is paused
	I1124 13:16:06.260717  362324 config.go:182] Loaded profile config "addons-715644": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 13:16:06.260729  362324 host.go:66] Checking if "addons-715644" exists ...
	I1124 13:16:06.261113  362324 cli_runner.go:164] Run: docker container inspect addons-715644 --format={{.State.Status}}
	I1124 13:16:06.277623  362324 ssh_runner.go:195] Run: systemctl --version
	I1124 13:16:06.277678  362324 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-715644
	I1124 13:16:06.295626  362324 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/addons-715644/id_rsa Username:docker}
	I1124 13:16:06.396256  362324 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 13:16:06.396329  362324 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 13:16:06.424556  362324 cri.go:89] found id: "32b77a8342024730ccea78bac96aafa904a8452871f7ad6ede2d73201b5297ae"
	I1124 13:16:06.424579  362324 cri.go:89] found id: "9946073c4dbc00e94343f77ffd10424e77a179847087d6518167e5967c01b6ac"
	I1124 13:16:06.424584  362324 cri.go:89] found id: "97f3de9ff4a387cd014af0563e9b3b98a067b536fde246c22a5e118e0732e718"
	I1124 13:16:06.424589  362324 cri.go:89] found id: "3492cc92692158b5e6044d1ea2c4d57bac8bda63d2ed59eab5592914657894e2"
	I1124 13:16:06.424594  362324 cri.go:89] found id: "db3d376ec41b7312ca6691a8743e9d9a78aa3d7d989fa62b5d4bed28d1352645"
	I1124 13:16:06.424599  362324 cri.go:89] found id: "a9fad48eebd55f9988e66d76b4c1ba8045a48e49c7d2ec247434bda584f848bd"
	I1124 13:16:06.424603  362324 cri.go:89] found id: "7f9d1b3fe4a9063805681f97d4f25ffb0176dbe2fc494b57f8b0ba808d906ab6"
	I1124 13:16:06.424608  362324 cri.go:89] found id: "de0cc746d3ed0efaaab995d7c9060f139d2048fdf2dc530c24968b27a493b199"
	I1124 13:16:06.424612  362324 cri.go:89] found id: "93fbc223db37d476f76feddf4b1b953455b15bfd5655c2e0f21618dcd9149be0"
	I1124 13:16:06.424630  362324 cri.go:89] found id: "9be932139aaef6d1a0813197ebd25803631d854bd08cc570ceb08ebb61e42533"
	I1124 13:16:06.424639  362324 cri.go:89] found id: "e91ca551d1e0e1b67d5a38e6388b9a94476991f0536bc399a4fc40157634ce1f"
	I1124 13:16:06.424644  362324 cri.go:89] found id: "edf878679786a9abe21c9897fa78bbc59bd532ce6f4ce69457f2e17deb93802a"
	I1124 13:16:06.424652  362324 cri.go:89] found id: "17cd086a0a85475fa6e37dbc6d551664d7ac78bb7fdc3540fb1bd1e175d77793"
	I1124 13:16:06.424657  362324 cri.go:89] found id: "33bdbf096e506d847514d785957b6ff08d7be79c8c2ce3cad269fc769d56f682"
	I1124 13:16:06.424665  362324 cri.go:89] found id: "bef94f1c94dd311ef47360262b10fc75702b47761e4bf690355c88cd5acbf47d"
	I1124 13:16:06.424676  362324 cri.go:89] found id: "83f5e4de5d19483eba28cce6cc0496cbd37a7f45e5dd8fdd549b5d2a0fe93004"
	I1124 13:16:06.424687  362324 cri.go:89] found id: "9fc9fbc51a1d5d85e698682518d6aabdc2c3030302e75bcb87adb6ae7d4fac0e"
	I1124 13:16:06.424694  362324 cri.go:89] found id: "80ca7185520801a449353432d8a29471e92f942c8e6b30f587a794abac0fb7dd"
	I1124 13:16:06.424697  362324 cri.go:89] found id: "3c0239d349ace6e30dffd2560683ba8f02197dfb6eb490d1097a535ae3d5599f"
	I1124 13:16:06.424701  362324 cri.go:89] found id: "1cd2d69a4521db2c270e5a2192b5d29f185e8986efeacc56186cd5c8a32fba30"
	I1124 13:16:06.424713  362324 cri.go:89] found id: "f906d790e557cecfdacb1936cb0ed8443cc0bc9466c826f9d800db6bf44bf47e"
	I1124 13:16:06.424721  362324 cri.go:89] found id: "8bd061f25cd271e0f1c7d640c968152672462e55b0bd0013dd192360bd8041bf"
	I1124 13:16:06.424733  362324 cri.go:89] found id: "73d6f909ae2dca0d1fb7c89dd3fa82bdb9b4d2d1c56e66703aa1b07a967e3cc6"
	I1124 13:16:06.424741  362324 cri.go:89] found id: "e080d87ce42a145608e63f7c6b4c14b99b3014112ba7d536610206377da1bcb5"
	I1124 13:16:06.424747  362324 cri.go:89] found id: ""
	I1124 13:16:06.424795  362324 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 13:16:06.438521  362324 out.go:203] 
	W1124 13:16:06.439770  362324 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T13:16:06Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T13:16:06Z" level=error msg="open /run/runc: no such file or directory"
	
	W1124 13:16:06.439798  362324 out.go:285] * 
	* 
	W1124 13:16:06.444175  362324 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1124 13:16:06.445527  362324 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-amd64 -p addons-715644 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (6.26s)

TestAddons/parallel/Yakd (6.24s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-jd5f7" [acd907f9-2c23-4bf8-94e6-51d6f6d46ef6] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.002955066s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-715644 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-715644 addons disable yakd --alsologtostderr -v=1: exit status 11 (239.518127ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1124 13:16:26.300302  364723 out.go:360] Setting OutFile to fd 1 ...
	I1124 13:16:26.300605  364723 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:16:26.300620  364723 out.go:374] Setting ErrFile to fd 2...
	I1124 13:16:26.300626  364723 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:16:26.300917  364723 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-348000/.minikube/bin
	I1124 13:16:26.301253  364723 mustload.go:66] Loading cluster: addons-715644
	I1124 13:16:26.301576  364723 config.go:182] Loaded profile config "addons-715644": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 13:16:26.301593  364723 addons.go:622] checking whether the cluster is paused
	I1124 13:16:26.301687  364723 config.go:182] Loaded profile config "addons-715644": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 13:16:26.301700  364723 host.go:66] Checking if "addons-715644" exists ...
	I1124 13:16:26.302154  364723 cli_runner.go:164] Run: docker container inspect addons-715644 --format={{.State.Status}}
	I1124 13:16:26.318722  364723 ssh_runner.go:195] Run: systemctl --version
	I1124 13:16:26.318786  364723 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-715644
	I1124 13:16:26.335298  364723 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/addons-715644/id_rsa Username:docker}
	I1124 13:16:26.434105  364723 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 13:16:26.434192  364723 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 13:16:26.461975  364723 cri.go:89] found id: "e235aa339c9ac5d961e51c8ba6ba912cc243b46b0ef6c59202698fe46121aefb"
	I1124 13:16:26.462001  364723 cri.go:89] found id: "32b77a8342024730ccea78bac96aafa904a8452871f7ad6ede2d73201b5297ae"
	I1124 13:16:26.462007  364723 cri.go:89] found id: "9946073c4dbc00e94343f77ffd10424e77a179847087d6518167e5967c01b6ac"
	I1124 13:16:26.462013  364723 cri.go:89] found id: "97f3de9ff4a387cd014af0563e9b3b98a067b536fde246c22a5e118e0732e718"
	I1124 13:16:26.462018  364723 cri.go:89] found id: "3492cc92692158b5e6044d1ea2c4d57bac8bda63d2ed59eab5592914657894e2"
	I1124 13:16:26.462021  364723 cri.go:89] found id: "db3d376ec41b7312ca6691a8743e9d9a78aa3d7d989fa62b5d4bed28d1352645"
	I1124 13:16:26.462029  364723 cri.go:89] found id: "a9fad48eebd55f9988e66d76b4c1ba8045a48e49c7d2ec247434bda584f848bd"
	I1124 13:16:26.462032  364723 cri.go:89] found id: "7f9d1b3fe4a9063805681f97d4f25ffb0176dbe2fc494b57f8b0ba808d906ab6"
	I1124 13:16:26.462035  364723 cri.go:89] found id: "de0cc746d3ed0efaaab995d7c9060f139d2048fdf2dc530c24968b27a493b199"
	I1124 13:16:26.462041  364723 cri.go:89] found id: "93fbc223db37d476f76feddf4b1b953455b15bfd5655c2e0f21618dcd9149be0"
	I1124 13:16:26.462047  364723 cri.go:89] found id: "9be932139aaef6d1a0813197ebd25803631d854bd08cc570ceb08ebb61e42533"
	I1124 13:16:26.462050  364723 cri.go:89] found id: "e91ca551d1e0e1b67d5a38e6388b9a94476991f0536bc399a4fc40157634ce1f"
	I1124 13:16:26.462053  364723 cri.go:89] found id: "edf878679786a9abe21c9897fa78bbc59bd532ce6f4ce69457f2e17deb93802a"
	I1124 13:16:26.462056  364723 cri.go:89] found id: "17cd086a0a85475fa6e37dbc6d551664d7ac78bb7fdc3540fb1bd1e175d77793"
	I1124 13:16:26.462062  364723 cri.go:89] found id: "33bdbf096e506d847514d785957b6ff08d7be79c8c2ce3cad269fc769d56f682"
	I1124 13:16:26.462067  364723 cri.go:89] found id: "bef94f1c94dd311ef47360262b10fc75702b47761e4bf690355c88cd5acbf47d"
	I1124 13:16:26.462072  364723 cri.go:89] found id: "83f5e4de5d19483eba28cce6cc0496cbd37a7f45e5dd8fdd549b5d2a0fe93004"
	I1124 13:16:26.462076  364723 cri.go:89] found id: "9fc9fbc51a1d5d85e698682518d6aabdc2c3030302e75bcb87adb6ae7d4fac0e"
	I1124 13:16:26.462079  364723 cri.go:89] found id: "80ca7185520801a449353432d8a29471e92f942c8e6b30f587a794abac0fb7dd"
	I1124 13:16:26.462082  364723 cri.go:89] found id: "3c0239d349ace6e30dffd2560683ba8f02197dfb6eb490d1097a535ae3d5599f"
	I1124 13:16:26.462085  364723 cri.go:89] found id: "1cd2d69a4521db2c270e5a2192b5d29f185e8986efeacc56186cd5c8a32fba30"
	I1124 13:16:26.462087  364723 cri.go:89] found id: "f906d790e557cecfdacb1936cb0ed8443cc0bc9466c826f9d800db6bf44bf47e"
	I1124 13:16:26.462090  364723 cri.go:89] found id: "8bd061f25cd271e0f1c7d640c968152672462e55b0bd0013dd192360bd8041bf"
	I1124 13:16:26.462093  364723 cri.go:89] found id: "73d6f909ae2dca0d1fb7c89dd3fa82bdb9b4d2d1c56e66703aa1b07a967e3cc6"
	I1124 13:16:26.462096  364723 cri.go:89] found id: "e080d87ce42a145608e63f7c6b4c14b99b3014112ba7d536610206377da1bcb5"
	I1124 13:16:26.462106  364723 cri.go:89] found id: ""
	I1124 13:16:26.462148  364723 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 13:16:26.475299  364723 out.go:203] 
	W1124 13:16:26.476395  364723 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T13:16:26Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T13:16:26Z" level=error msg="open /run/runc: no such file or directory"
	
	W1124 13:16:26.476410  364723 out.go:285] * 
	* 
	W1124 13:16:26.480288  364723 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1124 13:16:26.481519  364723 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable yakd addon: args "out/minikube-linux-amd64 -p addons-715644 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (6.24s)

TestAddons/parallel/AmdGpuDevicePlugin (6.25s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:352: "amd-gpu-device-plugin-hxftx" [0a8e4e82-1ce0-4f98-9dd7-0239163661a3] Running
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 6.003596615s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-715644 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-715644 addons disable amd-gpu-device-plugin --alsologtostderr -v=1: exit status 11 (240.218164ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1124 13:16:20.879068  364439 out.go:360] Setting OutFile to fd 1 ...
	I1124 13:16:20.879333  364439 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:16:20.879344  364439 out.go:374] Setting ErrFile to fd 2...
	I1124 13:16:20.879348  364439 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:16:20.879553  364439 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-348000/.minikube/bin
	I1124 13:16:20.879775  364439 mustload.go:66] Loading cluster: addons-715644
	I1124 13:16:20.880158  364439 config.go:182] Loaded profile config "addons-715644": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 13:16:20.880176  364439 addons.go:622] checking whether the cluster is paused
	I1124 13:16:20.880257  364439 config.go:182] Loaded profile config "addons-715644": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 13:16:20.880271  364439 host.go:66] Checking if "addons-715644" exists ...
	I1124 13:16:20.880613  364439 cli_runner.go:164] Run: docker container inspect addons-715644 --format={{.State.Status}}
	I1124 13:16:20.898079  364439 ssh_runner.go:195] Run: systemctl --version
	I1124 13:16:20.898120  364439 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-715644
	I1124 13:16:20.913700  364439 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33143 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/addons-715644/id_rsa Username:docker}
	I1124 13:16:21.012012  364439 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 13:16:21.012116  364439 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 13:16:21.041166  364439 cri.go:89] found id: "e235aa339c9ac5d961e51c8ba6ba912cc243b46b0ef6c59202698fe46121aefb"
	I1124 13:16:21.041186  364439 cri.go:89] found id: "32b77a8342024730ccea78bac96aafa904a8452871f7ad6ede2d73201b5297ae"
	I1124 13:16:21.041190  364439 cri.go:89] found id: "9946073c4dbc00e94343f77ffd10424e77a179847087d6518167e5967c01b6ac"
	I1124 13:16:21.041194  364439 cri.go:89] found id: "97f3de9ff4a387cd014af0563e9b3b98a067b536fde246c22a5e118e0732e718"
	I1124 13:16:21.041197  364439 cri.go:89] found id: "3492cc92692158b5e6044d1ea2c4d57bac8bda63d2ed59eab5592914657894e2"
	I1124 13:16:21.041200  364439 cri.go:89] found id: "db3d376ec41b7312ca6691a8743e9d9a78aa3d7d989fa62b5d4bed28d1352645"
	I1124 13:16:21.041203  364439 cri.go:89] found id: "a9fad48eebd55f9988e66d76b4c1ba8045a48e49c7d2ec247434bda584f848bd"
	I1124 13:16:21.041211  364439 cri.go:89] found id: "7f9d1b3fe4a9063805681f97d4f25ffb0176dbe2fc494b57f8b0ba808d906ab6"
	I1124 13:16:21.041215  364439 cri.go:89] found id: "de0cc746d3ed0efaaab995d7c9060f139d2048fdf2dc530c24968b27a493b199"
	I1124 13:16:21.041220  364439 cri.go:89] found id: "93fbc223db37d476f76feddf4b1b953455b15bfd5655c2e0f21618dcd9149be0"
	I1124 13:16:21.041223  364439 cri.go:89] found id: "9be932139aaef6d1a0813197ebd25803631d854bd08cc570ceb08ebb61e42533"
	I1124 13:16:21.041225  364439 cri.go:89] found id: "e91ca551d1e0e1b67d5a38e6388b9a94476991f0536bc399a4fc40157634ce1f"
	I1124 13:16:21.041228  364439 cri.go:89] found id: "edf878679786a9abe21c9897fa78bbc59bd532ce6f4ce69457f2e17deb93802a"
	I1124 13:16:21.041232  364439 cri.go:89] found id: "17cd086a0a85475fa6e37dbc6d551664d7ac78bb7fdc3540fb1bd1e175d77793"
	I1124 13:16:21.041244  364439 cri.go:89] found id: "33bdbf096e506d847514d785957b6ff08d7be79c8c2ce3cad269fc769d56f682"
	I1124 13:16:21.041259  364439 cri.go:89] found id: "bef94f1c94dd311ef47360262b10fc75702b47761e4bf690355c88cd5acbf47d"
	I1124 13:16:21.041271  364439 cri.go:89] found id: "83f5e4de5d19483eba28cce6cc0496cbd37a7f45e5dd8fdd549b5d2a0fe93004"
	I1124 13:16:21.041277  364439 cri.go:89] found id: "9fc9fbc51a1d5d85e698682518d6aabdc2c3030302e75bcb87adb6ae7d4fac0e"
	I1124 13:16:21.041281  364439 cri.go:89] found id: "80ca7185520801a449353432d8a29471e92f942c8e6b30f587a794abac0fb7dd"
	I1124 13:16:21.041289  364439 cri.go:89] found id: "3c0239d349ace6e30dffd2560683ba8f02197dfb6eb490d1097a535ae3d5599f"
	I1124 13:16:21.041294  364439 cri.go:89] found id: "1cd2d69a4521db2c270e5a2192b5d29f185e8986efeacc56186cd5c8a32fba30"
	I1124 13:16:21.041302  364439 cri.go:89] found id: "f906d790e557cecfdacb1936cb0ed8443cc0bc9466c826f9d800db6bf44bf47e"
	I1124 13:16:21.041306  364439 cri.go:89] found id: "8bd061f25cd271e0f1c7d640c968152672462e55b0bd0013dd192360bd8041bf"
	I1124 13:16:21.041313  364439 cri.go:89] found id: "73d6f909ae2dca0d1fb7c89dd3fa82bdb9b4d2d1c56e66703aa1b07a967e3cc6"
	I1124 13:16:21.041318  364439 cri.go:89] found id: "e080d87ce42a145608e63f7c6b4c14b99b3014112ba7d536610206377da1bcb5"
	I1124 13:16:21.041322  364439 cri.go:89] found id: ""
	I1124 13:16:21.041356  364439 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 13:16:21.054758  364439 out.go:203] 
	W1124 13:16:21.055648  364439 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T13:16:21Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T13:16:21Z" level=error msg="open /run/runc: no such file or directory"
	
	W1124 13:16:21.055682  364439 out.go:285] * 
	* 
	W1124 13:16:21.059590  364439 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_d91df5e23a6c7812cf3b3b0d72c142ff742a541e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_d91df5e23a6c7812cf3b3b0d72c142ff742a541e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1124 13:16:21.060800  364439 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable amd-gpu-device-plugin addon: args "out/minikube-linux-amd64 -p addons-715644 addons disable amd-gpu-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/AmdGpuDevicePlugin (6.25s)

TestFunctional/parallel/ServiceCmdConnect (602.72s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-334592 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-334592 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-wzzff" [a0a4aa7d-69b3-4093-b390-93a9f8313bc9] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmdConnect: WARNING: pod list for "default" "app=hello-node-connect" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-334592 -n functional-334592
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-11-24 13:31:54.586767975 +0000 UTC m=+1101.849673956
functional_test.go:1645: (dbg) Run:  kubectl --context functional-334592 describe po hello-node-connect-7d85dfc575-wzzff -n default
functional_test.go:1645: (dbg) kubectl --context functional-334592 describe po hello-node-connect-7d85dfc575-wzzff -n default:
Name:             hello-node-connect-7d85dfc575-wzzff
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-334592/192.168.49.2
Start Time:       Mon, 24 Nov 2025 13:21:54 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-txlmz (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-txlmz:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-wzzff to functional-334592
Normal   Pulling    7m5s (x5 over 10m)      kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m5s (x5 over 10m)      kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m5s (x5 over 10m)      kubelet            Error: ErrImagePull
Warning  Failed     4m59s (x20 over 9m59s)  kubelet            Error: ImagePullBackOff
Normal   BackOff    4m46s (x21 over 9m59s)  kubelet            Back-off pulling image "kicbase/echo-server"
functional_test.go:1645: (dbg) Run:  kubectl --context functional-334592 logs hello-node-connect-7d85dfc575-wzzff -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-334592 logs hello-node-connect-7d85dfc575-wzzff -n default: exit status 1 (57.974965ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-wzzff" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1645: kubectl --context functional-334592 logs hello-node-connect-7d85dfc575-wzzff -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
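
The pod never leaves ImagePullBackOff because the deployment was created with the short image name kicbase/echo-server, and the node's CRI-O short-name resolution is in enforcing mode, so the unqualified pull is rejected ("short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list"). The usual workarounds are sketched below; the docker.io prefix and the registries.conf values are assumptions, not something this report confirms.

	# Recreate the deployment with a fully qualified image (registry assumed):
	kubectl --context functional-334592 create deployment hello-node-connect --image docker.io/kicbase/echo-server
	# Or relax short-name handling on the node in /etc/containers/registries.conf
	# (illustrative values only):
	#   unqualified-search-registries = ["docker.io"]
	#   short-name-mode = "permissive"
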
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-334592 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
Name:             hello-node-connect-7d85dfc575-wzzff
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-334592/192.168.49.2
Start Time:       Mon, 24 Nov 2025 13:21:54 +0000
Labels:           app=hello-node-connect
pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.8
IPs:
IP:           10.244.0.8
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
echo-server:
Container ID:   
Image:          kicbase/echo-server
Image ID:       
Port:           <none>
Host Port:      <none>
State:          Waiting
Reason:       ImagePullBackOff
Ready:          False
Restart Count:  0
Environment:    <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-txlmz (ro)
Conditions:
Type                        Status
PodReadyToStartContainers   True 
Initialized                 True 
Ready                       False 
ContainersReady             False 
PodScheduled                True 
Volumes:
kube-api-access-txlmz:
Type:                    Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds:  3607
ConfigMapName:           kube-root-ca.crt
Optional:                false
DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type     Reason     Age                     From               Message
----     ------     ----                    ----               -------
Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-wzzff to functional-334592
Normal   Pulling    7m5s (x5 over 10m)      kubelet            Pulling image "kicbase/echo-server"
Warning  Failed     7m5s (x5 over 10m)      kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
Warning  Failed     7m5s (x5 over 10m)      kubelet            Error: ErrImagePull
Warning  Failed     4m59s (x20 over 9m59s)  kubelet            Error: ImagePullBackOff
Normal   BackOff    4m46s (x21 over 9m59s)  kubelet            Back-off pulling image "kicbase/echo-server"

functional_test.go:1618: (dbg) Run:  kubectl --context functional-334592 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-334592 logs -l app=hello-node-connect: exit status 1 (58.732381ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-wzzff" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1620: "kubectl --context functional-334592 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-334592 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.99.121.54
IPs:                      10.99.121.54
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  32670/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
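
The empty Endpoints field above is the direct consequence of the image pull failure: with no Ready pod behind the selector, the NodePort service has nothing to route to, so any connectivity check against it would fail regardless. Once the image pulls, the connection this test exercises could be verified by hand with something like the following sketch (32670 is the NodePort reported above):

	# minikube ip prints the node address; 32670 is the service's NodePort.
	curl -s "http://$(out/minikube-linux-amd64 -p functional-334592 ip):32670"
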
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-334592
helpers_test.go:243: (dbg) docker inspect functional-334592:

-- stdout --
	[
	    {
	        "Id": "a76716c85bc7452cfecf92333e101b5a35f5f25eaff77512f66df7c077d5808a",
	        "Created": "2025-11-24T13:20:02.305868679Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 375453,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T13:20:02.336078019Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/a76716c85bc7452cfecf92333e101b5a35f5f25eaff77512f66df7c077d5808a/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/a76716c85bc7452cfecf92333e101b5a35f5f25eaff77512f66df7c077d5808a/hostname",
	        "HostsPath": "/var/lib/docker/containers/a76716c85bc7452cfecf92333e101b5a35f5f25eaff77512f66df7c077d5808a/hosts",
	        "LogPath": "/var/lib/docker/containers/a76716c85bc7452cfecf92333e101b5a35f5f25eaff77512f66df7c077d5808a/a76716c85bc7452cfecf92333e101b5a35f5f25eaff77512f66df7c077d5808a-json.log",
	        "Name": "/functional-334592",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-334592:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-334592",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "a76716c85bc7452cfecf92333e101b5a35f5f25eaff77512f66df7c077d5808a",
	                "LowerDir": "/var/lib/docker/overlay2/24ece6a0447b536d37b3dc331bad47eca67055b1ab9fdfa228a6618694666970-init/diff:/var/lib/docker/overlay2/b17d6205cf290186b389ac7c1255d7274fea54ef27df9ff8755bddd2d25eb638/diff",
	                "MergedDir": "/var/lib/docker/overlay2/24ece6a0447b536d37b3dc331bad47eca67055b1ab9fdfa228a6618694666970/merged",
	                "UpperDir": "/var/lib/docker/overlay2/24ece6a0447b536d37b3dc331bad47eca67055b1ab9fdfa228a6618694666970/diff",
	                "WorkDir": "/var/lib/docker/overlay2/24ece6a0447b536d37b3dc331bad47eca67055b1ab9fdfa228a6618694666970/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-334592",
	                "Source": "/var/lib/docker/volumes/functional-334592/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-334592",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-334592",
	                "name.minikube.sigs.k8s.io": "functional-334592",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "6a897534e571462894f4f9cc32c954070547cd37956bdfe38e660ab4f752e730",
	            "SandboxKey": "/var/run/docker/netns/6a897534e571",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33153"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33154"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33157"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33155"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33156"
	                    }
	                ]
	            },
	            "Networks": {
	                "functional-334592": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "a942b60b9872fdf0637233831f62bff234c0345db0cc27a8da060e325e6af854",
	                    "EndpointID": "651d8c8ea73bc9deb26924f0def44da635083843125d92e9298b8a7b75367c7d",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "MacAddress": "1a:09:67:fb:01:53",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-334592",
	                        "a76716c85bc7"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
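For reference, the published port mappings captured in the inspect output above can be read back directly with docker's Go-template formatter; the commands below are illustrative standard docker CLI usage, not part of the test harness output, and assume access to the host that ran this container:

    docker inspect -f '{{json .NetworkSettings.Ports}}' functional-334592
    # 8441/tcp (the apiserver port) maps to 127.0.0.1:33156 in the output above
    docker port functional-334592 8441/tcp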
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-334592 -n functional-334592
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-334592 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-334592 logs -n 25: (1.186014506s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                        ARGS                                                        │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-334592 ssh -- ls -la /mount-9p                                                                          │ functional-334592 │ jenkins │ v1.37.0 │ 24 Nov 25 13:22 UTC │ 24 Nov 25 13:22 UTC │
	│ ssh            │ functional-334592 ssh sudo umount -f /mount-9p                                                                     │ functional-334592 │ jenkins │ v1.37.0 │ 24 Nov 25 13:22 UTC │                     │
	│ ssh            │ functional-334592 ssh findmnt -T /mount1                                                                           │ functional-334592 │ jenkins │ v1.37.0 │ 24 Nov 25 13:22 UTC │                     │
	│ mount          │ -p functional-334592 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3656109573/001:/mount2 --alsologtostderr -v=1 │ functional-334592 │ jenkins │ v1.37.0 │ 24 Nov 25 13:22 UTC │                     │
	│ mount          │ -p functional-334592 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3656109573/001:/mount3 --alsologtostderr -v=1 │ functional-334592 │ jenkins │ v1.37.0 │ 24 Nov 25 13:22 UTC │                     │
	│ mount          │ -p functional-334592 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3656109573/001:/mount1 --alsologtostderr -v=1 │ functional-334592 │ jenkins │ v1.37.0 │ 24 Nov 25 13:22 UTC │                     │
	│ ssh            │ functional-334592 ssh findmnt -T /mount1                                                                           │ functional-334592 │ jenkins │ v1.37.0 │ 24 Nov 25 13:22 UTC │ 24 Nov 25 13:22 UTC │
	│ ssh            │ functional-334592 ssh findmnt -T /mount2                                                                           │ functional-334592 │ jenkins │ v1.37.0 │ 24 Nov 25 13:22 UTC │ 24 Nov 25 13:22 UTC │
	│ ssh            │ functional-334592 ssh findmnt -T /mount3                                                                           │ functional-334592 │ jenkins │ v1.37.0 │ 24 Nov 25 13:22 UTC │ 24 Nov 25 13:22 UTC │
	│ mount          │ -p functional-334592 --kill=true                                                                                   │ functional-334592 │ jenkins │ v1.37.0 │ 24 Nov 25 13:22 UTC │                     │
	│ update-context │ functional-334592 update-context --alsologtostderr -v=2                                                            │ functional-334592 │ jenkins │ v1.37.0 │ 24 Nov 25 13:22 UTC │ 24 Nov 25 13:22 UTC │
	│ update-context │ functional-334592 update-context --alsologtostderr -v=2                                                            │ functional-334592 │ jenkins │ v1.37.0 │ 24 Nov 25 13:22 UTC │ 24 Nov 25 13:22 UTC │
	│ update-context │ functional-334592 update-context --alsologtostderr -v=2                                                            │ functional-334592 │ jenkins │ v1.37.0 │ 24 Nov 25 13:22 UTC │ 24 Nov 25 13:22 UTC │
	│ image          │ functional-334592 image ls --format short --alsologtostderr                                                        │ functional-334592 │ jenkins │ v1.37.0 │ 24 Nov 25 13:22 UTC │ 24 Nov 25 13:22 UTC │
	│ image          │ functional-334592 image ls --format yaml --alsologtostderr                                                         │ functional-334592 │ jenkins │ v1.37.0 │ 24 Nov 25 13:22 UTC │ 24 Nov 25 13:22 UTC │
	│ image          │ functional-334592 image ls --format json --alsologtostderr                                                         │ functional-334592 │ jenkins │ v1.37.0 │ 24 Nov 25 13:22 UTC │ 24 Nov 25 13:22 UTC │
	│ image          │ functional-334592 image ls --format table --alsologtostderr                                                        │ functional-334592 │ jenkins │ v1.37.0 │ 24 Nov 25 13:22 UTC │ 24 Nov 25 13:22 UTC │
	│ ssh            │ functional-334592 ssh pgrep buildkitd                                                                              │ functional-334592 │ jenkins │ v1.37.0 │ 24 Nov 25 13:22 UTC │                     │
	│ image          │ functional-334592 image build -t localhost/my-image:functional-334592 testdata/build --alsologtostderr             │ functional-334592 │ jenkins │ v1.37.0 │ 24 Nov 25 13:22 UTC │ 24 Nov 25 13:22 UTC │
	│ image          │ functional-334592 image ls                                                                                         │ functional-334592 │ jenkins │ v1.37.0 │ 24 Nov 25 13:22 UTC │ 24 Nov 25 13:22 UTC │
	│ service        │ functional-334592 service list                                                                                     │ functional-334592 │ jenkins │ v1.37.0 │ 24 Nov 25 13:31 UTC │ 24 Nov 25 13:31 UTC │
	│ service        │ functional-334592 service list -o json                                                                             │ functional-334592 │ jenkins │ v1.37.0 │ 24 Nov 25 13:31 UTC │ 24 Nov 25 13:31 UTC │
	│ service        │ functional-334592 service --namespace=default --https --url hello-node                                             │ functional-334592 │ jenkins │ v1.37.0 │ 24 Nov 25 13:31 UTC │                     │
	│ service        │ functional-334592 service hello-node --url --format={{.IP}}                                                        │ functional-334592 │ jenkins │ v1.37.0 │ 24 Nov 25 13:31 UTC │                     │
	│ service        │ functional-334592 service hello-node --url                                                                         │ functional-334592 │ jenkins │ v1.37.0 │ 24 Nov 25 13:31 UTC │                     │
	└────────────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 13:22:08
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 13:22:08.839376  389165 out.go:360] Setting OutFile to fd 1 ...
	I1124 13:22:08.839652  389165 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:22:08.839663  389165 out.go:374] Setting ErrFile to fd 2...
	I1124 13:22:08.839669  389165 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:22:08.840012  389165 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-348000/.minikube/bin
	I1124 13:22:08.840458  389165 out.go:368] Setting JSON to false
	I1124 13:22:08.841503  389165 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":7476,"bootTime":1763983053,"procs":244,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 13:22:08.841566  389165 start.go:143] virtualization: kvm guest
	I1124 13:22:08.843884  389165 out.go:179] * [functional-334592] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1124 13:22:08.845228  389165 out.go:179]   - MINIKUBE_LOCATION=21932
	I1124 13:22:08.845243  389165 notify.go:221] Checking for updates...
	I1124 13:22:08.847338  389165 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 13:22:08.848441  389165 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21932-348000/kubeconfig
	I1124 13:22:08.849485  389165 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21932-348000/.minikube
	I1124 13:22:08.854092  389165 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1124 13:22:08.855386  389165 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 13:22:08.856730  389165 config.go:182] Loaded profile config "functional-334592": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 13:22:08.857310  389165 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 13:22:08.880640  389165 docker.go:124] docker version: linux-29.0.3:Docker Engine - Community
	I1124 13:22:08.880782  389165 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 13:22:08.936388  389165 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-11-24 13:22:08.927594092 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 13:22:08.936504  389165 docker.go:319] overlay module found
	I1124 13:22:08.938143  389165 out.go:179] * Using the docker driver based on the existing profile
	I1124 13:22:08.939144  389165 start.go:309] selected driver: docker
	I1124 13:22:08.939162  389165 start.go:927] validating driver "docker" against &{Name:functional-334592 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-334592 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 13:22:08.939271  389165 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 13:22:08.940961  389165 out.go:203] 
	W1124 13:22:08.942055  389165 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1124 13:22:08.943189  389165 out.go:203] 
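The RSRC_INSUFFICIENT_REQ_MEMORY exit above is minikube's pre-start memory validation rejecting the request: 250 MiB is below the 1800 MB minimum. The exact arguments of that start invocation are not shown here, but a request of this shape would trigger the same rejection (illustrative only; --memory, --driver, and --container-runtime are standard minikube flags):

    out/minikube-linux-amd64 start -p functional-334592 --driver=docker --container-runtime=crio --memory=250mb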
	
	
	==> CRI-O <==
	Nov 24 13:22:14 functional-334592 crio[3575]: time="2025-11-24T13:22:14.021212605Z" level=info msg="Removed pod sandbox: 4033b7370f0ed8aa1d8feed07f8a5da15594168c0efffb925fff49d6f8a87f4b" id=02034f5d-5c30-4586-aeb2-7313420f82cb name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 24 13:22:14 functional-334592 crio[3575]: time="2025-11-24T13:22:14.021586793Z" level=info msg="Stopping pod sandbox: feaad3c9313bd887888d73961f8978f2705103303f1bb6275896664d4f810305" id=26ab5fb3-66b5-485e-be76-700087ded9cc name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 24 13:22:14 functional-334592 crio[3575]: time="2025-11-24T13:22:14.021626812Z" level=info msg="Stopped pod sandbox (already stopped): feaad3c9313bd887888d73961f8978f2705103303f1bb6275896664d4f810305" id=26ab5fb3-66b5-485e-be76-700087ded9cc name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 24 13:22:14 functional-334592 crio[3575]: time="2025-11-24T13:22:14.021987049Z" level=info msg="Removing pod sandbox: feaad3c9313bd887888d73961f8978f2705103303f1bb6275896664d4f810305" id=2c418b1c-4eab-4667-af92-ba00a4af937d name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 24 13:22:14 functional-334592 crio[3575]: time="2025-11-24T13:22:14.024736743Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 24 13:22:14 functional-334592 crio[3575]: time="2025-11-24T13:22:14.02478302Z" level=info msg="Removed pod sandbox: feaad3c9313bd887888d73961f8978f2705103303f1bb6275896664d4f810305" id=2c418b1c-4eab-4667-af92-ba00a4af937d name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 24 13:22:14 functional-334592 crio[3575]: time="2025-11-24T13:22:14.905003818Z" level=info msg="Pulled image: docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a" id=148cae8e-0739-4d27-8446-9c2f234d0439 name=/runtime.v1.ImageService/PullImage
	Nov 24 13:22:14 functional-334592 crio[3575]: time="2025-11-24T13:22:14.905585877Z" level=info msg="Checking image status: docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" id=0242f7d6-c8ab-47f8-acb6-ad6a6042ee57 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 13:22:14 functional-334592 crio[3575]: time="2025-11-24T13:22:14.907146114Z" level=info msg="Checking image status: docker.io/kubernetesui/metrics-scraper:v1.0.8@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c" id=162558fe-adb9-4026-b354-54dcbb4388a4 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 13:22:14 functional-334592 crio[3575]: time="2025-11-24T13:22:14.910903385Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-wz9kj/dashboard-metrics-scraper" id=05c0ab0e-0cf0-4f52-95e4-21ce3aeabcd3 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 13:22:14 functional-334592 crio[3575]: time="2025-11-24T13:22:14.911034737Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 13:22:14 functional-334592 crio[3575]: time="2025-11-24T13:22:14.915217656Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 13:22:14 functional-334592 crio[3575]: time="2025-11-24T13:22:14.915376479Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/b8d4257151fb1e3e7072862f1cf4ae137f512f67c7bc4c43555b76c3e0f8333a/merged/etc/group: no such file or directory"
	Nov 24 13:22:14 functional-334592 crio[3575]: time="2025-11-24T13:22:14.915658042Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 13:22:14 functional-334592 crio[3575]: time="2025-11-24T13:22:14.951997812Z" level=info msg="Created container 563e18ee70147ed25191dd3392733aa6942150638782b2fddf5757f79a1e76b6: kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-wz9kj/dashboard-metrics-scraper" id=05c0ab0e-0cf0-4f52-95e4-21ce3aeabcd3 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 13:22:14 functional-334592 crio[3575]: time="2025-11-24T13:22:14.952668224Z" level=info msg="Starting container: 563e18ee70147ed25191dd3392733aa6942150638782b2fddf5757f79a1e76b6" id=32d9bc7a-7b73-4624-864c-4b864e30cef7 name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 13:22:14 functional-334592 crio[3575]: time="2025-11-24T13:22:14.954470062Z" level=info msg="Started container" PID=7627 containerID=563e18ee70147ed25191dd3392733aa6942150638782b2fddf5757f79a1e76b6 description=kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c-wz9kj/dashboard-metrics-scraper id=32d9bc7a-7b73-4624-864c-4b864e30cef7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=e07973aec1344491566e5086f9b088b46b76aa087d4c3f3ff27a569d91557845
	Nov 24 13:22:20 functional-334592 crio[3575]: time="2025-11-24T13:22:20.026991757Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=e6d8aae7-5437-44c4-ae20-4dca99613fb2 name=/runtime.v1.ImageService/PullImage
	Nov 24 13:22:36 functional-334592 crio[3575]: time="2025-11-24T13:22:36.026761563Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=08d1411a-ec0d-4f8c-97cb-abee516dc703 name=/runtime.v1.ImageService/PullImage
	Nov 24 13:23:01 functional-334592 crio[3575]: time="2025-11-24T13:23:01.027418207Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=3af5bcbd-ea8a-4d0f-8598-e10e0059a213 name=/runtime.v1.ImageService/PullImage
	Nov 24 13:23:20 functional-334592 crio[3575]: time="2025-11-24T13:23:20.026645825Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=23177d34-db9a-42c3-9c4b-30150d4cc05a name=/runtime.v1.ImageService/PullImage
	Nov 24 13:24:34 functional-334592 crio[3575]: time="2025-11-24T13:24:34.028134489Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=cb99818b-581b-4d3e-a21f-84b16834272e name=/runtime.v1.ImageService/PullImage
	Nov 24 13:24:49 functional-334592 crio[3575]: time="2025-11-24T13:24:49.02751164Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=8e6b7efe-c2a6-4db4-a648-8f8cfa6bc97b name=/runtime.v1.ImageService/PullImage
	Nov 24 13:27:20 functional-334592 crio[3575]: time="2025-11-24T13:27:20.027697927Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=bb60f359-3dca-467d-93d6-d41c43a87620 name=/runtime.v1.ImageService/PullImage
	Nov 24 13:27:35 functional-334592 crio[3575]: time="2025-11-24T13:27:35.027332798Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=5ceff8ba-1b0d-483c-83f5-6badb6c35abf name=/runtime.v1.ImageService/PullImage
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                            CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	563e18ee70147       docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a   9 minutes ago       Running             dashboard-metrics-scraper   0                   e07973aec1344       dashboard-metrics-scraper-77bf4d6c4c-wz9kj   kubernetes-dashboard
	b5f1057a2c466       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029         9 minutes ago       Running             kubernetes-dashboard        0                   f3bf25a71741b       kubernetes-dashboard-855c9754f9-mp8qt        kubernetes-dashboard
	4640b2ee49701       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998              9 minutes ago       Exited              mount-munger                0                   45062fb4fe615       busybox-mount                                default
	ca79921e28d45       docker.io/library/nginx@sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541                  9 minutes ago       Running             myfrontend                  0                   f4a865a67e163       sp-pod                                       default
	2e701ed97f0d4       docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7                  10 minutes ago      Running             nginx                       0                   ff66b2e68c9ff       nginx-svc                                    default
	9f4b597c69de3       docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da                  10 minutes ago      Running             mysql                       0                   467aca6e9f88d       mysql-5bb876957f-c2kmw                       default
	d29ab8438c52f       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                 10 minutes ago      Running             kube-apiserver              0                   ba5eac62acbb1       kube-apiserver-functional-334592             kube-system
	8303ea3432d5d       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                 10 minutes ago      Running             kube-controller-manager     1                   33420d5515f4c       kube-controller-manager-functional-334592    kube-system
	0e8d440a6041f       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                 10 minutes ago      Running             etcd                        1                   0d886e45b84f7       etcd-functional-334592                       kube-system
	e02849eb8e8e8       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                 11 minutes ago      Running             kube-scheduler              1                   5b01a62d57f99       kube-scheduler-functional-334592             kube-system
	6ffefd268cf7a       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                 11 minutes ago      Running             kube-proxy                  1                   69d2b1dcc2017       kube-proxy-8v9mc                             kube-system
	6c6822cd5a836       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                 11 minutes ago      Running             kindnet-cni                 1                   4568efe0f44c7       kindnet-w9848                                kube-system
	2eaab55386a51       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                 11 minutes ago      Running             coredns                     1                   81c58b2378a38       coredns-66bc5c9577-4gwwn                     kube-system
	a7ef56c1b851e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 11 minutes ago      Running             storage-provisioner         1                   446afb5caaf6a       storage-provisioner                          kube-system
	283e2b5223920       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                 11 minutes ago      Exited              coredns                     0                   81c58b2378a38       coredns-66bc5c9577-4gwwn                     kube-system
	cf1d737b0f901       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 11 minutes ago      Exited              storage-provisioner         0                   446afb5caaf6a       storage-provisioner                          kube-system
	020be1dd652da       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                 11 minutes ago      Exited              kindnet-cni                 0                   4568efe0f44c7       kindnet-w9848                                kube-system
	979d72b5f8786       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                 11 minutes ago      Exited              kube-proxy                  0                   69d2b1dcc2017       kube-proxy-8v9mc                             kube-system
	5f5c25acfa706       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                 11 minutes ago      Exited              kube-scheduler              0                   5b01a62d57f99       kube-scheduler-functional-334592             kube-system
	a8e6a426a355c       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                 11 minutes ago      Exited              etcd                        0                   0d886e45b84f7       etcd-functional-334592                       kube-system
	16321f921ca9b       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                 11 minutes ago      Exited              kube-controller-manager     0                   33420d5515f4c       kube-controller-manager-functional-334592    kube-system
	
	
	==> coredns [283e2b52239206d6a364a2166e132352b23f65fa816d0ad8e63e66003fc375df] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:49607 - 65301 "HINFO IN 4697336642852633912.7644577209353375739. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.472017128s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [2eaab55386a517cf0294ab69b29327fcfa11d4e38eeb64d170190e9db5642410] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:33116 - 3099 "HINFO IN 8251950075774873682.1945300507531967366. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.46539777s
	
	
	==> describe nodes <==
	Name:               functional-334592
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-334592
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b5d1c9f4e75f4e638a533695fd62619949cefcab
	                    minikube.k8s.io/name=functional-334592
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T13_20_14_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 13:20:11 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-334592
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 13:31:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 13:30:47 +0000   Mon, 24 Nov 2025 13:20:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 13:30:47 +0000   Mon, 24 Nov 2025 13:20:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 13:30:47 +0000   Mon, 24 Nov 2025 13:20:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 13:30:47 +0000   Mon, 24 Nov 2025 13:20:30 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-334592
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                89b0d051-6e9f-4b1d-bfa8-ec61937ec7f8
	  Boot ID:                    9a34d64a-eb17-4892-9c0b-855837aec864
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-5p4sk                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     hello-node-connect-7d85dfc575-wzzff           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     mysql-5bb876957f-c2kmw                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     10m
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m55s
	  kube-system                 coredns-66bc5c9577-4gwwn                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     11m
	  kube-system                 etcd-functional-334592                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         11m
	  kube-system                 kindnet-w9848                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      11m
	  kube-system                 kube-apiserver-functional-334592              250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-334592     200m (2%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-8v9mc                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-functional-334592              100m (1%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-wz9kj    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m46s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-mp8qt         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m46s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 11m                kube-proxy       
	  Normal  Starting                 10m                kube-proxy       
	  Normal  NodeHasSufficientMemory  11m                kubelet          Node functional-334592 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m                kubelet          Node functional-334592 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     11m                kubelet          Node functional-334592 status is now: NodeHasSufficientPID
	  Normal  Starting                 11m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           11m                node-controller  Node functional-334592 event: Registered Node functional-334592 in Controller
	  Normal  NodeReady                11m                kubelet          Node functional-334592 status is now: NodeReady
	  Normal  Starting                 10m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-334592 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-334592 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x8 over 10m)  kubelet          Node functional-334592 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           10m                node-controller  Node functional-334592 event: Registered Node functional-334592 in Controller
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a c8 62 0b 56 43 08 06
	[  +0.000389] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 7e 1c 4f d3 0f 6c 08 06
	[Nov24 13:16] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +1.054353] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +1.023912] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +1.023872] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +1.023897] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +1.023891] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +2.047768] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +4.031637] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +8.191144] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[ +16.382308] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[Nov24 13:17] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	
	
	==> etcd [0e8d440a6041f9cb3ba49f149e125d62032f4244ea14745adea24844317c5919] <==
	{"level":"warn","ts":"2025-11-24T13:21:15.207986Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:21:15.213841Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40866","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:21:15.219546Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40880","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:21:15.225611Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40892","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:21:15.231613Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40904","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:21:15.244953Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40928","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:21:15.252694Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40944","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:21:15.258644Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40962","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:21:15.265435Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40974","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:21:15.271545Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40994","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:21:15.284013Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41024","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:21:15.289997Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41056","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:21:15.296216Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41066","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:21:15.314070Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41090","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:21:15.317100Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41102","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:21:15.322883Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41122","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:21:15.328690Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41146","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:21:15.374698Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41182","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-24T13:21:45.762874Z","caller":"traceutil/trace.go:172","msg":"trace[1466099406] linearizableReadLoop","detail":"{readStateIndex:697; appliedIndex:697; }","duration":"104.140336ms","start":"2025-11-24T13:21:45.658710Z","end":"2025-11-24T13:21:45.762851Z","steps":["trace[1466099406] 'read index received'  (duration: 104.135128ms)","trace[1466099406] 'applied index is now lower than readState.Index'  (duration: 4.466µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-24T13:21:45.763024Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"104.276055ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-24T13:21:45.763064Z","caller":"traceutil/trace.go:172","msg":"trace[386041520] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:640; }","duration":"104.353467ms","start":"2025-11-24T13:21:45.658700Z","end":"2025-11-24T13:21:45.763053Z","steps":["trace[386041520] 'agreement among raft nodes before linearized reading'  (duration: 104.247765ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T13:21:45.763092Z","caller":"traceutil/trace.go:172","msg":"trace[1087509268] transaction","detail":"{read_only:false; response_revision:641; number_of_response:1; }","duration":"131.138836ms","start":"2025-11-24T13:21:45.631937Z","end":"2025-11-24T13:21:45.763076Z","steps":["trace[1087509268] 'process raft request'  (duration: 130.989033ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T13:31:14.922047Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1136}
	{"level":"info","ts":"2025-11-24T13:31:14.941854Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1136,"took":"19.474829ms","hash":1771739340,"current-db-size-bytes":3526656,"current-db-size":"3.5 MB","current-db-size-in-use-bytes":1593344,"current-db-size-in-use":"1.6 MB"}
	{"level":"info","ts":"2025-11-24T13:31:14.941909Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":1771739340,"revision":1136,"compact-revision":-1}
	
	
	==> etcd [a8e6a426a355c04f82b6332bdad999e6eea1a9d8f65a457cdd9d241cb4d590da] <==
	{"level":"warn","ts":"2025-11-24T13:20:10.756917Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46092","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:20:10.763961Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:20:10.769713Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46126","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:20:10.782452Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46136","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:20:10.789673Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46152","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:20:10.796449Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46162","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:20:10.845341Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:46188","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-24T13:20:54.985827Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-11-24T13:20:54.985920Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-334592","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-11-24T13:20:54.986019Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-24T13:21:01.987514Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-24T13:21:01.987611Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-24T13:21:01.987659Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-11-24T13:21:01.987720Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-11-24T13:21:01.987684Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-24T13:21:01.987779Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-24T13:21:01.987822Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-24T13:21:01.987835Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"warn","ts":"2025-11-24T13:21:01.987791Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-24T13:21:01.987856Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-24T13:21:01.987747Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"info","ts":"2025-11-24T13:21:01.990092Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-11-24T13:21:01.990155Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-24T13:21:01.990187Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-11-24T13:21:01.990199Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-334592","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 13:31:56 up  2:14,  0 user,  load average: 0.43, 0.24, 0.57
	Linux functional-334592 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [020be1dd652da07c761a004eb8a61f349e1c3751d0d477f9079543e6356f8ffa] <==
	I1124 13:20:19.782321       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 13:20:19.782567       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1124 13:20:19.782691       1 main.go:148] setting mtu 1500 for CNI 
	I1124 13:20:19.782706       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 13:20:19.782721       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T13:20:19Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 13:20:19.984751       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 13:20:19.984783       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 13:20:19.984797       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 13:20:19.984972       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1124 13:20:20.423288       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 13:20:20.423311       1 metrics.go:72] Registering metrics
	I1124 13:20:20.423384       1 controller.go:711] "Syncing nftables rules"
	I1124 13:20:29.985786       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 13:20:29.985828       1 main.go:301] handling current node
	I1124 13:20:39.985687       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 13:20:39.985714       1 main.go:301] handling current node
	I1124 13:20:49.985079       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 13:20:49.985124       1 main.go:301] handling current node
	
	
	==> kindnet [6c6822cd5a836d63eeabe775eda1cc31074481e3d5c9bd88ab378ec22f3ba2fd] <==
	I1124 13:29:46.144543       1 main.go:301] handling current node
	I1124 13:29:56.146528       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 13:29:56.146561       1 main.go:301] handling current node
	I1124 13:30:06.143771       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 13:30:06.143817       1 main.go:301] handling current node
	I1124 13:30:16.144703       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 13:30:16.144736       1 main.go:301] handling current node
	I1124 13:30:26.141137       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 13:30:26.141183       1 main.go:301] handling current node
	I1124 13:30:36.145227       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 13:30:36.145268       1 main.go:301] handling current node
	I1124 13:30:46.144182       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 13:30:46.144212       1 main.go:301] handling current node
	I1124 13:30:56.149494       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 13:30:56.149532       1 main.go:301] handling current node
	I1124 13:31:06.143756       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 13:31:06.143784       1 main.go:301] handling current node
	I1124 13:31:16.142797       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 13:31:16.142837       1 main.go:301] handling current node
	I1124 13:31:26.148527       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 13:31:26.148558       1 main.go:301] handling current node
	I1124 13:31:36.143397       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 13:31:36.143426       1 main.go:301] handling current node
	I1124 13:31:46.149994       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1124 13:31:46.150020       1 main.go:301] handling current node
	
	
	==> kube-apiserver [d29ab8438c52fdd49bbf568ef1d5e1a5656a8ce04a4cb55c8b4abf24185d85e2] <==
	I1124 13:21:15.859828       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1124 13:21:16.088819       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1124 13:21:16.734517       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1124 13:21:16.938986       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1124 13:21:16.940014       1 controller.go:667] quota admission added evaluator for: endpoints
	I1124 13:21:16.943583       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1124 13:21:17.360833       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1124 13:21:17.445032       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1124 13:21:17.483184       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 13:21:17.487743       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 13:21:34.685463       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.106.73.3"}
	I1124 13:21:39.748340       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.104.12.112"}
	I1124 13:21:39.785844       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1124 13:21:40.056244       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.99.102.221"}
	I1124 13:21:41.817272       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.99.62.181"}
	E1124 13:21:52.876958       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:35278: use of closed network connection
	E1124 13:21:53.825053       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:35296: use of closed network connection
	I1124 13:21:54.266779       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.99.121.54"}
	E1124 13:21:55.656379       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:35326: use of closed network connection
	E1124 13:21:59.210494       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:44854: use of closed network connection
	E1124 13:22:08.395879       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:45412: use of closed network connection
	I1124 13:22:09.819018       1 controller.go:667] quota admission added evaluator for: namespaces
	I1124 13:22:09.941337       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.103.59.12"}
	I1124 13:22:09.955221       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.101.233.184"}
	I1124 13:31:15.778882       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [16321f921ca9b0da5a6a54d70cac378a8139225095a87848564ee381f8321da7] <==
	I1124 13:20:18.228769       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1124 13:20:18.228803       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1124 13:20:18.228778       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1124 13:20:18.228861       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1124 13:20:18.228925       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1124 13:20:18.228964       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-334592"
	I1124 13:20:18.229004       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1124 13:20:18.228967       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1124 13:20:18.229056       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1124 13:20:18.229056       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1124 13:20:18.229115       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1124 13:20:18.229241       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1124 13:20:18.230139       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1124 13:20:18.230161       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1124 13:20:18.232810       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1124 13:20:18.232835       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1124 13:20:18.232880       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1124 13:20:18.233842       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1124 13:20:18.233864       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1124 13:20:18.235048       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 13:20:18.235048       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1124 13:20:18.237225       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1124 13:20:18.242621       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1124 13:20:18.247055       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 13:20:33.230881       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-controller-manager [8303ea3432d5d6e35fbdd1c5e5cffbb506d8b051ccd22835add4811a4ffbea12] <==
	I1124 13:21:19.198239       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1124 13:21:19.198269       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1124 13:21:19.198285       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1124 13:21:19.198347       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1124 13:21:19.198425       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1124 13:21:19.198488       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1124 13:21:19.198514       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-334592"
	I1124 13:21:19.198582       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1124 13:21:19.198673       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1124 13:21:19.198781       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1124 13:21:19.199097       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1124 13:21:19.199122       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1124 13:21:19.201162       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1124 13:21:19.202704       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 13:21:19.203905       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1124 13:21:19.205164       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1124 13:21:19.207821       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1124 13:21:19.221189       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1124 13:22:09.887353       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1124 13:22:09.892449       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1124 13:22:09.894477       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1124 13:22:09.896738       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1124 13:22:09.898789       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1124 13:22:09.908007       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1124 13:22:09.909637       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-proxy [6ffefd268cf7abbe141448fb6c64f8d02fe4778e782d1a5f9c03c05838335e57] <==
	I1124 13:20:55.803965       1 server_linux.go:53] "Using iptables proxy"
	I1124 13:20:55.862777       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1124 13:20:55.962946       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1124 13:20:55.962979       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1124 13:20:55.963083       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 13:20:55.981360       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 13:20:55.981409       1 server_linux.go:132] "Using iptables Proxier"
	I1124 13:20:55.986754       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 13:20:55.987138       1 server.go:527] "Version info" version="v1.34.1"
	I1124 13:20:55.987173       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 13:20:55.988513       1 config.go:200] "Starting service config controller"
	I1124 13:20:55.988534       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 13:20:55.988556       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 13:20:55.988559       1 config.go:106] "Starting endpoint slice config controller"
	I1124 13:20:55.988575       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 13:20:55.988538       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 13:20:55.988624       1 config.go:309] "Starting node config controller"
	I1124 13:20:55.988634       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 13:20:55.988640       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1124 13:20:56.088807       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1124 13:20:56.088837       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1124 13:20:56.088929       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [979d72b5f8786b1d4c670b79122f5f8e3bc528fe92e8621617b91cdf9ace9f3b] <==
	I1124 13:20:19.640667       1 server_linux.go:53] "Using iptables proxy"
	I1124 13:20:19.709604       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1124 13:20:19.809914       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1124 13:20:19.809959       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1124 13:20:19.810058       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 13:20:19.828072       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 13:20:19.828126       1 server_linux.go:132] "Using iptables Proxier"
	I1124 13:20:19.833067       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 13:20:19.833393       1 server.go:527] "Version info" version="v1.34.1"
	I1124 13:20:19.833429       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 13:20:19.834634       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 13:20:19.834657       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 13:20:19.834670       1 config.go:200] "Starting service config controller"
	I1124 13:20:19.834676       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 13:20:19.834692       1 config.go:106] "Starting endpoint slice config controller"
	I1124 13:20:19.834697       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 13:20:19.834718       1 config.go:309] "Starting node config controller"
	I1124 13:20:19.834725       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 13:20:19.834731       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1124 13:20:19.934723       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1124 13:20:19.934754       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1124 13:20:19.934842       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [5f5c25acfa706b687d406cdafccfe728eb8f7b04f28f75b505513cb8fdcc2e8a] <==
	E1124 13:20:12.000570       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1124 13:20:12.000867       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1124 13:20:12.000870       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1124 13:20:12.000988       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1124 13:20:12.001065       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1124 13:20:12.001144       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1124 13:20:12.001267       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1124 13:20:12.001314       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1124 13:20:12.001434       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1124 13:20:12.001492       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1124 13:20:12.001519       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1124 13:20:12.001573       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1124 13:20:12.002078       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1124 13:20:12.002133       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1124 13:20:12.002193       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1124 13:20:12.002217       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1124 13:20:12.002426       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1124 13:20:12.003219       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	I1124 13:20:13.499747       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 13:20:54.874020       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 13:20:54.874069       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1124 13:20:54.874095       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1124 13:20:54.874054       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1124 13:20:54.874180       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1124 13:20:54.874211       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [e02849eb8e8e89d29719bcc21327c915752f193bc5367fc1f9d7515e5029f942] <==
	I1124 13:21:03.949884       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1124 13:21:03.949902       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 13:21:03.949921       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 13:21:03.949921       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1124 13:21:03.949921       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1124 13:21:03.950331       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1124 13:21:03.951183       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1124 13:21:03.951370       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1124 13:21:04.050685       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1124 13:21:04.050711       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1124 13:21:04.050745       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E1124 13:21:15.771372       1 reflector.go:205] "Failed to watch" err="configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1124 13:21:15.774415       1 reflector.go:205] "Failed to watch" err="pods is forbidden: User \"system:kube-scheduler\" cannot watch resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1124 13:21:15.774540       1 reflector.go:205] "Failed to watch" err="configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1124 13:21:15.781341       1 reflector.go:205] "Failed to watch" err="namespaces is forbidden: User \"system:kube-scheduler\" cannot watch resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1124 13:21:15.781521       1 reflector.go:205] "Failed to watch" err="replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot watch resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1124 13:21:15.781546       1 reflector.go:205] "Failed to watch" err="resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1124 13:21:15.781562       1 reflector.go:205] "Failed to watch" err="resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1124 13:21:15.781586       1 reflector.go:205] "Failed to watch" err="nodes is forbidden: User \"system:kube-scheduler\" cannot watch resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1124 13:21:15.781600       1 reflector.go:205] "Failed to watch" err="csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1124 13:21:15.781617       1 reflector.go:205] "Failed to watch" err="replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot watch resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1124 13:21:15.781633       1 reflector.go:205] "Failed to watch" err="deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1124 13:21:15.781648       1 reflector.go:205] "Failed to watch" err="persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot watch resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1124 13:21:15.781663       1 reflector.go:205] "Failed to watch" err="csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1124 13:21:15.781708       1 reflector.go:205] "Failed to watch" err="services is forbidden: User \"system:kube-scheduler\" cannot watch resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	
	
	==> kubelet <==
	Nov 24 13:29:17 functional-334592 kubelet[4292]: E1124 13:29:17.026531    4292 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-wzzff" podUID="a0a4aa7d-69b3-4093-b390-93a9f8313bc9"
	Nov 24 13:29:18 functional-334592 kubelet[4292]: E1124 13:29:18.026862    4292 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-5p4sk" podUID="27942986-3b8e-445b-805f-1e176b63c75a"
	Nov 24 13:29:30 functional-334592 kubelet[4292]: E1124 13:29:30.026697    4292 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-5p4sk" podUID="27942986-3b8e-445b-805f-1e176b63c75a"
	Nov 24 13:29:30 functional-334592 kubelet[4292]: E1124 13:29:30.026840    4292 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-wzzff" podUID="a0a4aa7d-69b3-4093-b390-93a9f8313bc9"
	Nov 24 13:29:41 functional-334592 kubelet[4292]: E1124 13:29:41.027154    4292 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-wzzff" podUID="a0a4aa7d-69b3-4093-b390-93a9f8313bc9"
	Nov 24 13:29:45 functional-334592 kubelet[4292]: E1124 13:29:45.026432    4292 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-5p4sk" podUID="27942986-3b8e-445b-805f-1e176b63c75a"
	Nov 24 13:29:53 functional-334592 kubelet[4292]: E1124 13:29:53.027223    4292 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-wzzff" podUID="a0a4aa7d-69b3-4093-b390-93a9f8313bc9"
	Nov 24 13:29:59 functional-334592 kubelet[4292]: E1124 13:29:59.027163    4292 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-5p4sk" podUID="27942986-3b8e-445b-805f-1e176b63c75a"
	Nov 24 13:30:04 functional-334592 kubelet[4292]: E1124 13:30:04.026956    4292 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-wzzff" podUID="a0a4aa7d-69b3-4093-b390-93a9f8313bc9"
	Nov 24 13:30:11 functional-334592 kubelet[4292]: E1124 13:30:11.026755    4292 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-5p4sk" podUID="27942986-3b8e-445b-805f-1e176b63c75a"
	Nov 24 13:30:19 functional-334592 kubelet[4292]: E1124 13:30:19.027015    4292 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-wzzff" podUID="a0a4aa7d-69b3-4093-b390-93a9f8313bc9"
	Nov 24 13:30:26 functional-334592 kubelet[4292]: E1124 13:30:26.026822    4292 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-5p4sk" podUID="27942986-3b8e-445b-805f-1e176b63c75a"
	Nov 24 13:30:32 functional-334592 kubelet[4292]: E1124 13:30:32.028017    4292 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-wzzff" podUID="a0a4aa7d-69b3-4093-b390-93a9f8313bc9"
	Nov 24 13:30:41 functional-334592 kubelet[4292]: E1124 13:30:41.027009    4292 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-5p4sk" podUID="27942986-3b8e-445b-805f-1e176b63c75a"
	Nov 24 13:30:46 functional-334592 kubelet[4292]: E1124 13:30:46.026504    4292 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-wzzff" podUID="a0a4aa7d-69b3-4093-b390-93a9f8313bc9"
	Nov 24 13:30:56 functional-334592 kubelet[4292]: E1124 13:30:56.026884    4292 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-5p4sk" podUID="27942986-3b8e-445b-805f-1e176b63c75a"
	Nov 24 13:30:59 functional-334592 kubelet[4292]: E1124 13:30:59.026383    4292 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-wzzff" podUID="a0a4aa7d-69b3-4093-b390-93a9f8313bc9"
	Nov 24 13:31:09 functional-334592 kubelet[4292]: E1124 13:31:09.026426    4292 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-5p4sk" podUID="27942986-3b8e-445b-805f-1e176b63c75a"
	Nov 24 13:31:11 functional-334592 kubelet[4292]: E1124 13:31:11.026882    4292 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-wzzff" podUID="a0a4aa7d-69b3-4093-b390-93a9f8313bc9"
	Nov 24 13:31:20 functional-334592 kubelet[4292]: E1124 13:31:20.027213    4292 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-5p4sk" podUID="27942986-3b8e-445b-805f-1e176b63c75a"
	Nov 24 13:31:25 functional-334592 kubelet[4292]: E1124 13:31:25.026551    4292 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-wzzff" podUID="a0a4aa7d-69b3-4093-b390-93a9f8313bc9"
	Nov 24 13:31:31 functional-334592 kubelet[4292]: E1124 13:31:31.026502    4292 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-5p4sk" podUID="27942986-3b8e-445b-805f-1e176b63c75a"
	Nov 24 13:31:38 functional-334592 kubelet[4292]: E1124 13:31:38.027275    4292 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-wzzff" podUID="a0a4aa7d-69b3-4093-b390-93a9f8313bc9"
	Nov 24 13:31:46 functional-334592 kubelet[4292]: E1124 13:31:46.027504    4292 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-5p4sk" podUID="27942986-3b8e-445b-805f-1e176b63c75a"
	Nov 24 13:31:50 functional-334592 kubelet[4292]: E1124 13:31:50.026614    4292 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-wzzff" podUID="a0a4aa7d-69b3-4093-b390-93a9f8313bc9"
	
	
	==> kubernetes-dashboard [b5f1057a2c46658d6fa2ec320b49a56dbab6327d025eec88dd8044be2f3e55ee] <==
	2025/11/24 13:22:13 Starting overwatch
	2025/11/24 13:22:13 Using namespace: kubernetes-dashboard
	2025/11/24 13:22:13 Using in-cluster config to connect to apiserver
	2025/11/24 13:22:13 Using secret token for csrf signing
	2025/11/24 13:22:13 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/24 13:22:13 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/24 13:22:13 Successful initial request to the apiserver, version: v1.34.1
	2025/11/24 13:22:13 Generating JWE encryption key
	2025/11/24 13:22:13 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/24 13:22:13 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/24 13:22:13 Initializing JWE encryption key from synchronized object
	2025/11/24 13:22:13 Creating in-cluster Sidecar client
	2025/11/24 13:22:13 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/24 13:22:13 Serving insecurely on HTTP port: 9090
	2025/11/24 13:22:43 Successful request to sidecar
	
	
	==> storage-provisioner [a7ef56c1b851e4c384f53543a2e6eafa58287c21f91fdcf43357c6e047e050c0] <==
	W1124 13:31:31.715320       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:31:33.718456       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:31:33.721989       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:31:35.724367       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:31:35.728674       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:31:37.731446       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:31:37.735234       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:31:39.737920       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:31:39.741240       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:31:41.743344       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:31:41.746839       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:31:43.749258       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:31:43.752816       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:31:45.756078       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:31:45.763077       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:31:47.765647       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:31:47.770366       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:31:49.772830       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:31:49.776279       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:31:51.778838       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:31:51.783593       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:31:53.785965       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:31:53.789553       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:31:55.793062       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:31:55.796756       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [cf1d737b0f9017938e383fc13a45332f015e900d5e3a20c00e90e8b7fc7d2a93] <==
	W1124 13:20:30.718110       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:20:30.721343       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 13:20:30.816097       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-334592_5923af0e-3a0a-4e41-a16a-a8734229e759!
	W1124 13:20:32.724262       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:20:32.728441       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:20:34.731162       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:20:34.734995       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:20:36.737738       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:20:36.742470       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:20:38.745127       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:20:38.748666       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:20:40.751638       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:20:40.755249       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:20:42.757635       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:20:42.760943       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:20:44.763502       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:20:44.767034       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:20:46.769802       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:20:46.773512       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:20:48.776722       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:20:48.784017       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:20:50.786954       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:20:50.790765       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:20:52.793174       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:20:52.796452       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-334592 -n functional-334592
helpers_test.go:269: (dbg) Run:  kubectl --context functional-334592 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-5p4sk hello-node-connect-7d85dfc575-wzzff
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-334592 describe pod busybox-mount hello-node-75c85bcc94-5p4sk hello-node-connect-7d85dfc575-wzzff
helpers_test.go:290: (dbg) kubectl --context functional-334592 describe pod busybox-mount hello-node-75c85bcc94-5p4sk hello-node-connect-7d85dfc575-wzzff:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-334592/192.168.49.2
	Start Time:       Mon, 24 Nov 2025 13:22:05 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.10
	IPs:
	  IP:  10.244.0.10
	Containers:
	  mount-munger:
	    Container ID:  cri-o://4640b2ee497016b7d32a173fa17f98323a8318f9f08fda23d1c1966d5c7b9b3b
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Mon, 24 Nov 2025 13:22:06 +0000
	      Finished:     Mon, 24 Nov 2025 13:22:06 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ck9rf (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-ck9rf:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  9m51s  default-scheduler  Successfully assigned default/busybox-mount to functional-334592
	  Normal  Pulling    9m51s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     9m50s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 700ms (700ms including waiting). Image size: 4631262 bytes.
	  Normal  Created    9m50s  kubelet            Created container: mount-munger
	  Normal  Started    9m50s  kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-5p4sk
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-334592/192.168.49.2
	Start Time:       Mon, 24 Nov 2025 13:21:40 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.5
	IPs:
	  IP:           10.244.0.5
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-z957j (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-z957j:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/hello-node-75c85bcc94-5p4sk to functional-334592
	  Normal   Pulling    7m22s (x5 over 10m)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m22s (x5 over 10m)  kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     7m22s (x5 over 10m)  kubelet            Error: ErrImagePull
	  Normal   BackOff    10s (x42 over 10m)   kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     10s (x42 over 10m)   kubelet            Error: ImagePullBackOff
	
	
	Name:             hello-node-connect-7d85dfc575-wzzff
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-334592/192.168.49.2
	Start Time:       Mon, 24 Nov 2025 13:21:54 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.8
	IPs:
	  IP:           10.244.0.8
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-txlmz (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-txlmz:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-wzzff to functional-334592
	  Normal   Pulling    7m7s (x5 over 10m)    kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     7m7s (x5 over 10m)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     7m7s (x5 over 10m)    kubelet            Error: ErrImagePull
	  Warning  Failed     5m1s (x20 over 10m)   kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m48s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"

                                                
                                                
-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (602.72s)
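Note on the ErrImagePull events above: "short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list" is CRI-O's short-name resolution rejecting an unqualified image reference when enforcing mode is on and the name cannot be resolved to a single registry. A fully qualified reference sidesteps short-name resolution entirely; a minimal sketch of the same deployment with a qualified name (illustrative only, not what the test ran):

    kubectl --context functional-334592 create deployment hello-node \
      --image docker.io/kicbase/echo-server:latest

A registries.conf short-name alias for kicbase/echo-server would have the same effect without changing the test.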

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (600.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-334592 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-334592 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-5p4sk" [27942986-3b8e-445b-805f-1e176b63c75a] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmd/DeployApp: WARNING: pod list for "default" "app=hello-node" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-334592 -n functional-334592
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-11-24 13:31:40.382461599 +0000 UTC m=+1087.645367592
functional_test.go:1460: (dbg) Run:  kubectl --context functional-334592 describe po hello-node-75c85bcc94-5p4sk -n default
functional_test.go:1460: (dbg) kubectl --context functional-334592 describe po hello-node-75c85bcc94-5p4sk -n default:
Name:             hello-node-75c85bcc94-5p4sk
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-334592/192.168.49.2
Start Time:       Mon, 24 Nov 2025 13:21:40 +0000
Labels:           app=hello-node
                  pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.5
IPs:
  IP:           10.244.0.5
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-z957j (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-z957j:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                     From               Message
  ----     ------     ----                    ----               -------
  Normal   Scheduled  10m                     default-scheduler  Successfully assigned default/hello-node-75c85bcc94-5p4sk to functional-334592
  Normal   Pulling    7m6s (x5 over 10m)      kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     7m6s (x5 over 9m55s)    kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
  Warning  Failed     7m6s (x5 over 9m55s)    kubelet            Error: ErrImagePull
  Warning  Failed     4m54s (x20 over 9m54s)  kubelet            Error: ImagePullBackOff
  Normal   BackOff    4m43s (x21 over 9m54s)  kubelet            Back-off pulling image "kicbase/echo-server"
functional_test.go:1460: (dbg) Run:  kubectl --context functional-334592 logs hello-node-75c85bcc94-5p4sk -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-334592 logs hello-node-75c85bcc94-5p4sk -n default: exit status 1 (62.722303ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-5p4sk" is waiting to start: trying and failing to pull image

                                                
                                                
** /stderr **
functional_test.go:1460: kubectl --context functional-334592 logs hello-node-75c85bcc94-5p4sk -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.59s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.85s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-334592 image load --daemon kicbase/echo-server:functional-334592 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-334592 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-334592" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.85s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.86s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-334592 image load --daemon kicbase/echo-server:functional-334592 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-334592 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-334592" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.86s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-334592
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-334592 image load --daemon kicbase/echo-server:functional-334592 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-334592 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-334592" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.26s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-334592 image save kicbase/echo-server:functional-334592 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:401: expected "/home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.30s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-334592 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:426: loading image into minikube from file: <nil>

                                                
                                                
** stderr ** 
	I1124 13:22:01.184537  387306 out.go:360] Setting OutFile to fd 1 ...
	I1124 13:22:01.185163  387306 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:22:01.185177  387306 out.go:374] Setting ErrFile to fd 2...
	I1124 13:22:01.185185  387306 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:22:01.185620  387306 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-348000/.minikube/bin
	I1124 13:22:01.186821  387306 config.go:182] Loaded profile config "functional-334592": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 13:22:01.187014  387306 config.go:182] Loaded profile config "functional-334592": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 13:22:01.187598  387306 cli_runner.go:164] Run: docker container inspect functional-334592 --format={{.State.Status}}
	I1124 13:22:01.204176  387306 ssh_runner.go:195] Run: systemctl --version
	I1124 13:22:01.204214  387306 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-334592
	I1124 13:22:01.219503  387306 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/functional-334592/id_rsa Username:docker}
	I1124 13:22:01.322288  387306 cache_images.go:291] Loading image from: /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar
	W1124 13:22:01.322366  387306 cache_images.go:255] Failed to load cached images for "functional-334592": loading images: stat /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar: no such file or directory
	I1124 13:22:01.322388  387306 cache_images.go:267] failed pushing to: functional-334592

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.20s)
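This failure is downstream of ImageSaveToFile above: `image save` never wrote /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar, so `image load` has nothing to stat. The pair of commands the two tests exercise is roughly the following (illustrative path, not the CI workspace):

    out/minikube-linux-amd64 -p functional-334592 image save kicbase/echo-server:functional-334592 /tmp/echo-server.tar
    out/minikube-linux-amd64 -p functional-334592 image load /tmp/echo-server.tar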

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-334592
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-334592 image save --daemon kicbase/echo-server:functional-334592 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-334592
functional_test.go:447: (dbg) Non-zero exit: docker image inspect localhost/kicbase/echo-server:functional-334592: exit status 1 (16.652166ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-334592

                                                
                                                
** /stderr **
functional_test.go:449: expected image to be loaded into Docker, but image was not found: exit status 1

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-334592

                                                
                                                
** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.33s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/HTTPS (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-334592 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-334592 service --namespace=default --https --url hello-node: exit status 115 (526.527737ms)

                                                
                                                
-- stdout --
	https://192.168.49.2:32243
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-amd64 -p functional-334592 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.53s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/Format (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-334592 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-334592 service hello-node --url --format={{.IP}}: exit status 115 (526.660819ms)

                                                
                                                
-- stdout --
	192.168.49.2
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-amd64 -p functional-334592 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.53s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/URL (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-334592 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-334592 service hello-node --url: exit status 115 (524.585983ms)

                                                
                                                
-- stdout --
	http://192.168.49.2:32243
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-amd64 -p functional-334592 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:32243
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.52s)
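The three ServiceCmd URL failures (HTTPS, Format, URL) all report SVC_UNREACHABLE for the same reason: the hello-node service has a NodePort allocated, but its only pod is stuck in ImagePullBackOff (see the events earlier in this report), so no ready endpoint backs the service. One way to confirm that from the same kubectl context (illustrative command, not part of the test run):

    kubectl --context functional-334592 get endpoints hello-node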

                                                
                                    
x
+
TestJSONOutput/pause/Command (2.29s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-833174 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p json-output-833174 --output=json --user=testUser: exit status 80 (2.285882124s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"a94f0b42-caa7-49cc-9f9a-7f3452e7b1da","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-833174 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"6ca563b6-6957-4afd-b31a-b9f4b18433d6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-11-24T13:41:39Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"2d344688-3e31-4357-993d-e32dbe9fc555","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 pause -p json-output-833174 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (2.29s)

                                                
                                    
x
+
TestJSONOutput/unpause/Command (1.79s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-833174 --output=json --user=testUser
E1124 13:41:39.789409  351593 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/functional-334592/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 unpause -p json-output-833174 --output=json --user=testUser: exit status 80 (1.789433513s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"eeb7e688-e8fb-498b-a7b1-c865d4be6430","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-833174 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"92b837fe-8c20-441c-bebb-7d1802ab8fb0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-11-24T13:41:41Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"5281e1a1-65a9-4881-bfe1-67d24f685353","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

                                                
                                                
-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 unpause -p json-output-833174 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (1.79s)
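Both pause and unpause fail at the same step: minikube runs `sudo runc list -f json` on the node and gets "open /run/runc: no such file or directory". /run/runc is runc's default state directory; if the node's CRI-O is configured with a different OCI runtime (newer CRI-O builds commonly default to crun, whose state lives under /run/crun), that directory never exists and the listing fails even though the containers themselves are healthy. Hypothetical commands to check which runtime the node is actually using (assuming the crun default; not verified from these logs):

    out/minikube-linux-amd64 -p json-output-833174 ssh -- sudo ls /run/runc /run/crun
    out/minikube-linux-amd64 -p json-output-833174 ssh -- sudo crio config | grep -A 2 default_runtime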

                                                
                                    
x
+
TestPause/serial/Pause (6.26s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-677692 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p pause-677692 --alsologtostderr -v=5: exit status 80 (2.431801882s)

                                                
                                                
-- stdout --
	* Pausing node pause-677692 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1124 13:54:41.571754  533760 out.go:360] Setting OutFile to fd 1 ...
	I1124 13:54:41.571879  533760 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:54:41.571903  533760 out.go:374] Setting ErrFile to fd 2...
	I1124 13:54:41.571910  533760 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:54:41.572234  533760 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-348000/.minikube/bin
	I1124 13:54:41.572506  533760 out.go:368] Setting JSON to false
	I1124 13:54:41.572532  533760 mustload.go:66] Loading cluster: pause-677692
	I1124 13:54:41.573158  533760 config.go:182] Loaded profile config "pause-677692": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 13:54:41.573727  533760 cli_runner.go:164] Run: docker container inspect pause-677692 --format={{.State.Status}}
	I1124 13:54:41.593361  533760 host.go:66] Checking if "pause-677692" exists ...
	I1124 13:54:41.593592  533760 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 13:54:41.656384  533760 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-24 13:54:41.645542601 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 13:54:41.657433  533760 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-677692 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) want
virtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1124 13:54:41.659970  533760 out.go:179] * Pausing node pause-677692 ... 
	I1124 13:54:41.661245  533760 host.go:66] Checking if "pause-677692" exists ...
	I1124 13:54:41.661569  533760 ssh_runner.go:195] Run: systemctl --version
	I1124 13:54:41.661625  533760 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-677692
	I1124 13:54:41.680248  533760 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33358 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/pause-677692/id_rsa Username:docker}
	I1124 13:54:41.783347  533760 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 13:54:41.795628  533760 pause.go:52] kubelet running: true
	I1124 13:54:41.795687  533760 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1124 13:54:41.932169  533760 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1124 13:54:41.932275  533760 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1124 13:54:42.008513  533760 cri.go:89] found id: "5bd1064f94f908ca624e098766749ad47d88e5caa1c1787e882cbbb3ee3e2eaa"
	I1124 13:54:42.008553  533760 cri.go:89] found id: "956bc3ed3cc94622db07a7722f51f57af133c564694a55227687053239cc2cce"
	I1124 13:54:42.008559  533760 cri.go:89] found id: "c31417b0ddd4318cf9df33e13d996a1aa39754a7bf2a765270afade9daa1ffb6"
	I1124 13:54:42.008563  533760 cri.go:89] found id: "d03839e7fa9eac10fedbb04845ef30ffaec45efb2be489888bf2968bcb45ac09"
	I1124 13:54:42.008567  533760 cri.go:89] found id: "0d00cabc63cac09f0e493592d5429cf1e5bbb136da910238c780141b94e18b0f"
	I1124 13:54:42.008571  533760 cri.go:89] found id: "8c549f2828eda0f02b6f9f194ff2300c57a32d274a9d673bace362d09c6a71b7"
	I1124 13:54:42.008575  533760 cri.go:89] found id: "e5b8c93a0b3b6ce571718079c629004228b713c38451c35ff4b7ee0ca30c79da"
	I1124 13:54:42.008579  533760 cri.go:89] found id: ""
	I1124 13:54:42.008646  533760 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 13:54:42.021671  533760 retry.go:31] will retry after 291.885782ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T13:54:42Z" level=error msg="open /run/runc: no such file or directory"
	I1124 13:54:42.314116  533760 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 13:54:42.328849  533760 pause.go:52] kubelet running: false
	I1124 13:54:42.328929  533760 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1124 13:54:42.459129  533760 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1124 13:54:42.459240  533760 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1124 13:54:42.533111  533760 cri.go:89] found id: "5bd1064f94f908ca624e098766749ad47d88e5caa1c1787e882cbbb3ee3e2eaa"
	I1124 13:54:42.533136  533760 cri.go:89] found id: "956bc3ed3cc94622db07a7722f51f57af133c564694a55227687053239cc2cce"
	I1124 13:54:42.533142  533760 cri.go:89] found id: "c31417b0ddd4318cf9df33e13d996a1aa39754a7bf2a765270afade9daa1ffb6"
	I1124 13:54:42.533147  533760 cri.go:89] found id: "d03839e7fa9eac10fedbb04845ef30ffaec45efb2be489888bf2968bcb45ac09"
	I1124 13:54:42.533151  533760 cri.go:89] found id: "0d00cabc63cac09f0e493592d5429cf1e5bbb136da910238c780141b94e18b0f"
	I1124 13:54:42.533156  533760 cri.go:89] found id: "8c549f2828eda0f02b6f9f194ff2300c57a32d274a9d673bace362d09c6a71b7"
	I1124 13:54:42.533160  533760 cri.go:89] found id: "e5b8c93a0b3b6ce571718079c629004228b713c38451c35ff4b7ee0ca30c79da"
	I1124 13:54:42.533164  533760 cri.go:89] found id: ""
	I1124 13:54:42.533203  533760 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 13:54:42.545391  533760 retry.go:31] will retry after 479.259452ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T13:54:42Z" level=error msg="open /run/runc: no such file or directory"
	I1124 13:54:43.025012  533760 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 13:54:43.039763  533760 pause.go:52] kubelet running: false
	I1124 13:54:43.039829  533760 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1124 13:54:43.172481  533760 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1124 13:54:43.172569  533760 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1124 13:54:43.245247  533760 cri.go:89] found id: "5bd1064f94f908ca624e098766749ad47d88e5caa1c1787e882cbbb3ee3e2eaa"
	I1124 13:54:43.245274  533760 cri.go:89] found id: "956bc3ed3cc94622db07a7722f51f57af133c564694a55227687053239cc2cce"
	I1124 13:54:43.245280  533760 cri.go:89] found id: "c31417b0ddd4318cf9df33e13d996a1aa39754a7bf2a765270afade9daa1ffb6"
	I1124 13:54:43.245285  533760 cri.go:89] found id: "d03839e7fa9eac10fedbb04845ef30ffaec45efb2be489888bf2968bcb45ac09"
	I1124 13:54:43.245289  533760 cri.go:89] found id: "0d00cabc63cac09f0e493592d5429cf1e5bbb136da910238c780141b94e18b0f"
	I1124 13:54:43.245295  533760 cri.go:89] found id: "8c549f2828eda0f02b6f9f194ff2300c57a32d274a9d673bace362d09c6a71b7"
	I1124 13:54:43.245299  533760 cri.go:89] found id: "e5b8c93a0b3b6ce571718079c629004228b713c38451c35ff4b7ee0ca30c79da"
	I1124 13:54:43.245302  533760 cri.go:89] found id: ""
	I1124 13:54:43.245339  533760 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 13:54:43.258507  533760 retry.go:31] will retry after 411.091886ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T13:54:43Z" level=error msg="open /run/runc: no such file or directory"
	I1124 13:54:43.670020  533760 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 13:54:43.685477  533760 pause.go:52] kubelet running: false
	I1124 13:54:43.685542  533760 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1124 13:54:43.814739  533760 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1124 13:54:43.814833  533760 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1124 13:54:43.892550  533760 cri.go:89] found id: "5bd1064f94f908ca624e098766749ad47d88e5caa1c1787e882cbbb3ee3e2eaa"
	I1124 13:54:43.892595  533760 cri.go:89] found id: "956bc3ed3cc94622db07a7722f51f57af133c564694a55227687053239cc2cce"
	I1124 13:54:43.892602  533760 cri.go:89] found id: "c31417b0ddd4318cf9df33e13d996a1aa39754a7bf2a765270afade9daa1ffb6"
	I1124 13:54:43.892607  533760 cri.go:89] found id: "d03839e7fa9eac10fedbb04845ef30ffaec45efb2be489888bf2968bcb45ac09"
	I1124 13:54:43.892612  533760 cri.go:89] found id: "0d00cabc63cac09f0e493592d5429cf1e5bbb136da910238c780141b94e18b0f"
	I1124 13:54:43.892616  533760 cri.go:89] found id: "8c549f2828eda0f02b6f9f194ff2300c57a32d274a9d673bace362d09c6a71b7"
	I1124 13:54:43.892621  533760 cri.go:89] found id: "e5b8c93a0b3b6ce571718079c629004228b713c38451c35ff4b7ee0ca30c79da"
	I1124 13:54:43.892626  533760 cri.go:89] found id: ""
	I1124 13:54:43.892681  533760 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 13:54:43.912443  533760 out.go:203] 
	W1124 13:54:43.913869  533760 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T13:54:43Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T13:54:43Z" level=error msg="open /run/runc: no such file or directory"
	
	W1124 13:54:43.913901  533760 out.go:285] * 
	* 
	W1124 13:54:43.922931  533760 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1124 13:54:43.924361  533760 out.go:203] 

                                                
                                                
** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-amd64 pause -p pause-677692 --alsologtostderr -v=5" : exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-677692
helpers_test.go:243: (dbg) docker inspect pause-677692:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "339feee1898dc478d0d8a1a7724180e229dfe2f593bb67ddd2a289d15fe8e9b5",
	        "Created": "2025-11-24T13:54:00.272033103Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 522970,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T13:54:00.326136431Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/339feee1898dc478d0d8a1a7724180e229dfe2f593bb67ddd2a289d15fe8e9b5/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/339feee1898dc478d0d8a1a7724180e229dfe2f593bb67ddd2a289d15fe8e9b5/hostname",
	        "HostsPath": "/var/lib/docker/containers/339feee1898dc478d0d8a1a7724180e229dfe2f593bb67ddd2a289d15fe8e9b5/hosts",
	        "LogPath": "/var/lib/docker/containers/339feee1898dc478d0d8a1a7724180e229dfe2f593bb67ddd2a289d15fe8e9b5/339feee1898dc478d0d8a1a7724180e229dfe2f593bb67ddd2a289d15fe8e9b5-json.log",
	        "Name": "/pause-677692",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-677692:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-677692",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "339feee1898dc478d0d8a1a7724180e229dfe2f593bb67ddd2a289d15fe8e9b5",
	                "LowerDir": "/var/lib/docker/overlay2/57cc918ca35c0ab88cb2fb6d76b5fea09e47e27d2e09386feb4ce2a6d2c4ab83-init/diff:/var/lib/docker/overlay2/b17d6205cf290186b389ac7c1255d7274fea54ef27df9ff8755bddd2d25eb638/diff",
	                "MergedDir": "/var/lib/docker/overlay2/57cc918ca35c0ab88cb2fb6d76b5fea09e47e27d2e09386feb4ce2a6d2c4ab83/merged",
	                "UpperDir": "/var/lib/docker/overlay2/57cc918ca35c0ab88cb2fb6d76b5fea09e47e27d2e09386feb4ce2a6d2c4ab83/diff",
	                "WorkDir": "/var/lib/docker/overlay2/57cc918ca35c0ab88cb2fb6d76b5fea09e47e27d2e09386feb4ce2a6d2c4ab83/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-677692",
	                "Source": "/var/lib/docker/volumes/pause-677692/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-677692",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-677692",
	                "name.minikube.sigs.k8s.io": "pause-677692",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "fb8c03e9c231c5bfcebbfce7ad087a9b273ac43b35ffd9cd75f5ec41ccc22f77",
	            "SandboxKey": "/var/run/docker/netns/fb8c03e9c231",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33358"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33359"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33362"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33360"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33361"
	                    }
	                ]
	            },
	            "Networks": {
	                "pause-677692": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "cb7498ead568e19b542ea87402aa7be66c83f917e9911cce43d5db518ef94dfd",
	                    "EndpointID": "f6c23b47ecf66628788ac20c4e99e59fe0536c35c6b1e1c2de02650331aab439",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "e6:5d:2a:60:67:f6",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-677692",
	                        "339feee1898d"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-677692 -n pause-677692
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-677692 -n pause-677692: exit status 2 (366.379594ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-677692 logs -n 25
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                         ARGS                                                          │           PROFILE           │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼─────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p scheduled-stop-866823 --schedule 5m -v=5 --alsologtostderr                                                         │ scheduled-stop-866823       │ jenkins │ v1.37.0 │ 24 Nov 25 13:52 UTC │                     │
	│ stop    │ -p scheduled-stop-866823 --schedule 5m -v=5 --alsologtostderr                                                         │ scheduled-stop-866823       │ jenkins │ v1.37.0 │ 24 Nov 25 13:52 UTC │                     │
	│ stop    │ -p scheduled-stop-866823 --schedule 5m -v=5 --alsologtostderr                                                         │ scheduled-stop-866823       │ jenkins │ v1.37.0 │ 24 Nov 25 13:52 UTC │                     │
	│ stop    │ -p scheduled-stop-866823 --schedule 15s -v=5 --alsologtostderr                                                        │ scheduled-stop-866823       │ jenkins │ v1.37.0 │ 24 Nov 25 13:52 UTC │                     │
	│ stop    │ -p scheduled-stop-866823 --schedule 15s -v=5 --alsologtostderr                                                        │ scheduled-stop-866823       │ jenkins │ v1.37.0 │ 24 Nov 25 13:52 UTC │                     │
	│ stop    │ -p scheduled-stop-866823 --schedule 15s -v=5 --alsologtostderr                                                        │ scheduled-stop-866823       │ jenkins │ v1.37.0 │ 24 Nov 25 13:52 UTC │                     │
	│ stop    │ -p scheduled-stop-866823 --cancel-scheduled                                                                           │ scheduled-stop-866823       │ jenkins │ v1.37.0 │ 24 Nov 25 13:52 UTC │ 24 Nov 25 13:52 UTC │
	│ stop    │ -p scheduled-stop-866823 --schedule 15s -v=5 --alsologtostderr                                                        │ scheduled-stop-866823       │ jenkins │ v1.37.0 │ 24 Nov 25 13:52 UTC │                     │
	│ stop    │ -p scheduled-stop-866823 --schedule 15s -v=5 --alsologtostderr                                                        │ scheduled-stop-866823       │ jenkins │ v1.37.0 │ 24 Nov 25 13:52 UTC │                     │
	│ stop    │ -p scheduled-stop-866823 --schedule 15s -v=5 --alsologtostderr                                                        │ scheduled-stop-866823       │ jenkins │ v1.37.0 │ 24 Nov 25 13:52 UTC │ 24 Nov 25 13:53 UTC │
	│ delete  │ -p scheduled-stop-866823                                                                                              │ scheduled-stop-866823       │ jenkins │ v1.37.0 │ 24 Nov 25 13:53 UTC │ 24 Nov 25 13:53 UTC │
	│ start   │ -p insufficient-storage-676419 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio      │ insufficient-storage-676419 │ jenkins │ v1.37.0 │ 24 Nov 25 13:53 UTC │                     │
	│ delete  │ -p insufficient-storage-676419                                                                                        │ insufficient-storage-676419 │ jenkins │ v1.37.0 │ 24 Nov 25 13:53 UTC │ 24 Nov 25 13:53 UTC │
	│ start   │ -p pause-677692 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio             │ pause-677692                │ jenkins │ v1.37.0 │ 24 Nov 25 13:53 UTC │ 24 Nov 25 13:54 UTC │
	│ start   │ -p force-systemd-env-699216 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio            │ force-systemd-env-699216    │ jenkins │ v1.37.0 │ 24 Nov 25 13:53 UTC │ 24 Nov 25 13:54 UTC │
	│ start   │ -p offline-crio-669749 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio     │ offline-crio-669749         │ jenkins │ v1.37.0 │ 24 Nov 25 13:53 UTC │                     │
	│ start   │ -p NoKubernetes-940104 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio         │ NoKubernetes-940104         │ jenkins │ v1.37.0 │ 24 Nov 25 13:53 UTC │                     │
	│ start   │ -p NoKubernetes-940104 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                 │ NoKubernetes-940104         │ jenkins │ v1.37.0 │ 24 Nov 25 13:53 UTC │ 24 Nov 25 13:54 UTC │
	│ delete  │ -p force-systemd-env-699216                                                                                           │ force-systemd-env-699216    │ jenkins │ v1.37.0 │ 24 Nov 25 13:54 UTC │ 24 Nov 25 13:54 UTC │
	│ start   │ -p stopped-upgrade-040555 --memory=3072 --vm-driver=docker  --container-runtime=crio                                  │ stopped-upgrade-040555      │ jenkins │ v1.32.0 │ 24 Nov 25 13:54 UTC │                     │
	│ start   │ -p NoKubernetes-940104 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio │ NoKubernetes-940104         │ jenkins │ v1.37.0 │ 24 Nov 25 13:54 UTC │ 24 Nov 25 13:54 UTC │
	│ start   │ -p pause-677692 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                      │ pause-677692                │ jenkins │ v1.37.0 │ 24 Nov 25 13:54 UTC │ 24 Nov 25 13:54 UTC │
	│ delete  │ -p NoKubernetes-940104                                                                                                │ NoKubernetes-940104         │ jenkins │ v1.37.0 │ 24 Nov 25 13:54 UTC │ 24 Nov 25 13:54 UTC │
	│ pause   │ -p pause-677692 --alsologtostderr -v=5                                                                                │ pause-677692                │ jenkins │ v1.37.0 │ 24 Nov 25 13:54 UTC │                     │
	│ start   │ -p NoKubernetes-940104 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio │ NoKubernetes-940104         │ jenkins │ v1.37.0 │ 24 Nov 25 13:54 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴─────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 13:54:42
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 13:54:42.571377  534179 out.go:360] Setting OutFile to fd 1 ...
	I1124 13:54:42.571657  534179 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:54:42.571672  534179 out.go:374] Setting ErrFile to fd 2...
	I1124 13:54:42.571679  534179 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:54:42.571990  534179 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-348000/.minikube/bin
	I1124 13:54:42.572552  534179 out.go:368] Setting JSON to false
	I1124 13:54:42.573920  534179 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":9429,"bootTime":1763983053,"procs":271,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 13:54:42.574000  534179 start.go:143] virtualization: kvm guest
	I1124 13:54:42.575614  534179 out.go:179] * [NoKubernetes-940104] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1124 13:54:42.576992  534179 notify.go:221] Checking for updates...
	I1124 13:54:42.576997  534179 out.go:179]   - MINIKUBE_LOCATION=21932
	I1124 13:54:42.578005  534179 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 13:54:42.579418  534179 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21932-348000/kubeconfig
	I1124 13:54:42.580634  534179 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21932-348000/.minikube
	I1124 13:54:42.581767  534179 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1124 13:54:42.582844  534179 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 13:54:42.584457  534179 config.go:182] Loaded profile config "offline-crio-669749": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 13:54:42.584626  534179 config.go:182] Loaded profile config "pause-677692": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 13:54:42.584752  534179 config.go:182] Loaded profile config "stopped-upgrade-040555": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1124 13:54:42.584783  534179 start.go:1901] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I1124 13:54:42.584933  534179 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 13:54:42.610508  534179 docker.go:124] docker version: linux-29.0.3:Docker Engine - Community
	I1124 13:54:42.610618  534179 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 13:54:42.666603  534179 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:false NGoroutines:66 SystemTime:2025-11-24 13:54:42.656737784 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 13:54:42.666715  534179 docker.go:319] overlay module found
	I1124 13:54:42.668411  534179 out.go:179] * Using the docker driver based on user configuration
	I1124 13:54:42.669533  534179 start.go:309] selected driver: docker
	I1124 13:54:42.669553  534179 start.go:927] validating driver "docker" against <nil>
	I1124 13:54:42.669567  534179 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 13:54:42.670340  534179 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 13:54:42.726071  534179 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:false NGoroutines:66 SystemTime:2025-11-24 13:54:42.716522379 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 13:54:42.726174  534179 start.go:1901] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I1124 13:54:42.726246  534179 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1124 13:54:42.726473  534179 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1124 13:54:42.728015  534179 out.go:179] * Using Docker driver with root privileges
	I1124 13:54:42.729156  534179 cni.go:84] Creating CNI manager for ""
	I1124 13:54:42.729228  534179 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 13:54:42.729241  534179 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1124 13:54:42.729271  534179 start.go:1901] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I1124 13:54:42.729327  534179 start.go:353] cluster config:
	{Name:NoKubernetes-940104 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-940104 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Conta
inerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 13:54:42.730495  534179 out.go:179] * Starting minikube without Kubernetes in cluster NoKubernetes-940104
	I1124 13:54:42.731630  534179 cache.go:134] Beginning downloading kic base image for docker with crio
	I1124 13:54:42.732681  534179 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1124 13:54:42.733681  534179 cache.go:59] Skipping Kubernetes image caching due to --no-kubernetes flag
	I1124 13:54:42.733793  534179 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1124 13:54:42.733822  534179 profile.go:143] Saving config to /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/NoKubernetes-940104/config.json ...
	I1124 13:54:42.733862  534179 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/NoKubernetes-940104/config.json: {Name:mkbcf1fee9611b12497eb768abae4aaf5f800bd1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:54:42.753951  534179 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1124 13:54:42.753972  534179 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1124 13:54:42.753987  534179 cache.go:240] Successfully downloaded all kic artifacts
	I1124 13:54:42.754030  534179 start.go:360] acquireMachinesLock for NoKubernetes-940104: {Name:mk8883e63cf26f459c4f56710c0c0c77bbc53121 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 13:54:42.754092  534179 start.go:364] duration metric: took 43.147µs to acquireMachinesLock for "NoKubernetes-940104"
	I1124 13:54:42.754111  534179 start.go:93] Provisioning new machine with config: &{Name:NoKubernetes-940104 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-940104 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: S
SHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 13:54:42.754186  534179 start.go:125] createHost starting for "" (driver="docker")
	W1124 13:54:38.304289  520926 node_ready.go:57] node "offline-crio-669749" has "Ready":"False" status (will retry)
	W1124 13:54:40.804475  520926 node_ready.go:57] node "offline-crio-669749" has "Ready":"False" status (will retry)
	
	
	==> CRI-O <==
	Nov 24 13:54:38 pause-677692 crio[2138]: time="2025-11-24T13:54:38.24208876Z" level=info msg="RDT not available in the host system"
	Nov 24 13:54:38 pause-677692 crio[2138]: time="2025-11-24T13:54:38.24210912Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Nov 24 13:54:38 pause-677692 crio[2138]: time="2025-11-24T13:54:38.243164885Z" level=info msg="Conmon does support the --sync option"
	Nov 24 13:54:38 pause-677692 crio[2138]: time="2025-11-24T13:54:38.243184239Z" level=info msg="Conmon does support the --log-global-size-max option"
	Nov 24 13:54:38 pause-677692 crio[2138]: time="2025-11-24T13:54:38.243196835Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Nov 24 13:54:38 pause-677692 crio[2138]: time="2025-11-24T13:54:38.24393212Z" level=info msg="Conmon does support the --sync option"
	Nov 24 13:54:38 pause-677692 crio[2138]: time="2025-11-24T13:54:38.243945592Z" level=info msg="Conmon does support the --log-global-size-max option"
	Nov 24 13:54:38 pause-677692 crio[2138]: time="2025-11-24T13:54:38.247873439Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 24 13:54:38 pause-677692 crio[2138]: time="2025-11-24T13:54:38.247911147Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 24 13:54:38 pause-677692 crio[2138]: time="2025-11-24T13:54:38.248515019Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hoo
ks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_
mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory
= \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/
cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nri]\n    enable_nri = true\n    nri_listen = \"
/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Nov 24 13:54:38 pause-677692 crio[2138]: time="2025-11-24T13:54:38.248877972Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Nov 24 13:54:38 pause-677692 crio[2138]: time="2025-11-24T13:54:38.248958734Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Nov 24 13:54:38 pause-677692 crio[2138]: time="2025-11-24T13:54:38.322242073Z" level=info msg="Got pod network &{Name:coredns-66bc5c9577-h5vz5 Namespace:kube-system ID:672b6a9370af662ffa378413e9b3f3da946fabdd7e075eae822d04a932869f78 UID:682c48a1-83ba-4047-a2c5-13409b3a964e NetNS:/var/run/netns/31033811-f430-437e-8294-904db3d25b21 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0000e0080}] Aliases:map[]}"
	Nov 24 13:54:38 pause-677692 crio[2138]: time="2025-11-24T13:54:38.322393817Z" level=info msg="Checking pod kube-system_coredns-66bc5c9577-h5vz5 for CNI network kindnet (type=ptp)"
	Nov 24 13:54:38 pause-677692 crio[2138]: time="2025-11-24T13:54:38.322800944Z" level=info msg="Registered SIGHUP reload watcher"
	Nov 24 13:54:38 pause-677692 crio[2138]: time="2025-11-24T13:54:38.32282781Z" level=info msg="Starting seccomp notifier watcher"
	Nov 24 13:54:38 pause-677692 crio[2138]: time="2025-11-24T13:54:38.322871502Z" level=info msg="Create NRI interface"
	Nov 24 13:54:38 pause-677692 crio[2138]: time="2025-11-24T13:54:38.322973632Z" level=info msg="built-in NRI default validator is disabled"
	Nov 24 13:54:38 pause-677692 crio[2138]: time="2025-11-24T13:54:38.32298171Z" level=info msg="runtime interface created"
	Nov 24 13:54:38 pause-677692 crio[2138]: time="2025-11-24T13:54:38.322991106Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Nov 24 13:54:38 pause-677692 crio[2138]: time="2025-11-24T13:54:38.322996419Z" level=info msg="runtime interface starting up..."
	Nov 24 13:54:38 pause-677692 crio[2138]: time="2025-11-24T13:54:38.323001642Z" level=info msg="starting plugins..."
	Nov 24 13:54:38 pause-677692 crio[2138]: time="2025-11-24T13:54:38.323012194Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Nov 24 13:54:38 pause-677692 crio[2138]: time="2025-11-24T13:54:38.323306876Z" level=info msg="No systemd watchdog enabled"
	Nov 24 13:54:38 pause-677692 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	5bd1064f94f90       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   13 seconds ago      Running             coredns                   0                   672b6a9370af6       coredns-66bc5c9577-h5vz5               kube-system
	956bc3ed3cc94       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   24 seconds ago      Running             kindnet-cni               0                   f5bc70bc71b5c       kindnet-pwn88                          kube-system
	c31417b0ddd43       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   24 seconds ago      Running             kube-proxy                0                   4382222256ccd       kube-proxy-f6gbx                       kube-system
	d03839e7fa9ea       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   34 seconds ago      Running             kube-apiserver            0                   725ae5c65d6f2       kube-apiserver-pause-677692            kube-system
	0d00cabc63cac       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   34 seconds ago      Running             etcd                      0                   1af4751b61bc3       etcd-pause-677692                      kube-system
	8c549f2828eda       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   34 seconds ago      Running             kube-controller-manager   0                   2100ba53b72f5       kube-controller-manager-pause-677692   kube-system
	e5b8c93a0b3b6       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   34 seconds ago      Running             kube-scheduler            0                   b2a530f55d31d       kube-scheduler-pause-677692            kube-system
	
	
	==> coredns [5bd1064f94f908ca624e098766749ad47d88e5caa1c1787e882cbbb3ee3e2eaa] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:40233 - 31718 "HINFO IN 856722982997275619.5500299172942250344. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.108881979s
	
	
	==> describe nodes <==
	Name:               pause-677692
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-677692
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b5d1c9f4e75f4e638a533695fd62619949cefcab
	                    minikube.k8s.io/name=pause-677692
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T13_54_16_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 13:54:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-677692
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 13:54:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 13:54:31 +0000   Mon, 24 Nov 2025 13:54:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 13:54:31 +0000   Mon, 24 Nov 2025 13:54:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 13:54:31 +0000   Mon, 24 Nov 2025 13:54:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 13:54:31 +0000   Mon, 24 Nov 2025 13:54:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    pause-677692
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                8a283b5d-e477-471a-8db6-9d719cf640e5
	  Boot ID:                    9a34d64a-eb17-4892-9c0b-855837aec864
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-h5vz5                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     24s
	  kube-system                 etcd-pause-677692                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         29s
	  kube-system                 kindnet-pwn88                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      25s
	  kube-system                 kube-apiserver-pause-677692             250m (3%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-controller-manager-pause-677692    200m (2%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-proxy-f6gbx                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-scheduler-pause-677692             100m (1%)     0 (0%)      0 (0%)           0 (0%)         29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 24s   kube-proxy       
	  Normal  Starting                 30s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  29s   kubelet          Node pause-677692 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29s   kubelet          Node pause-677692 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29s   kubelet          Node pause-677692 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           25s   node-controller  Node pause-677692 event: Registered Node pause-677692 in Controller
	  Normal  NodeReady                13s   kubelet          Node pause-677692 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a c8 62 0b 56 43 08 06
	[  +0.000389] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 7e 1c 4f d3 0f 6c 08 06
	[Nov24 13:16] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +1.054353] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +1.023912] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +1.023872] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +1.023897] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +1.023891] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +2.047768] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +4.031637] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +8.191144] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[ +16.382308] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[Nov24 13:17] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	
	
	==> etcd [0d00cabc63cac09f0e493592d5429cf1e5bbb136da910238c780141b94e18b0f] <==
	{"level":"warn","ts":"2025-11-24T13:54:11.409480Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56250","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:54:11.418923Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56270","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:54:11.426800Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56280","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:54:11.434434Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:54:11.443254Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56318","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:54:11.453123Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56354","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:54:11.464833Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:54:11.472546Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56376","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:54:11.485175Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56392","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:54:11.498541Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56412","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:54:11.517869Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56416","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:54:11.526641Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56446","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:54:11.538061Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56460","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:54:11.546509Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56480","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:54:11.555037Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56492","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:54:11.563635Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56494","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:54:11.572085Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56514","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:54:11.581101Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:54:11.589032Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56548","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:54:11.597297Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56560","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:54:11.604952Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56570","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:54:11.618353Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56590","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:54:11.626683Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56610","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:54:11.635301Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56630","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:54:11.694238Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56650","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 13:54:45 up  2:37,  0 user,  load average: 4.43, 1.94, 1.31
	Linux pause-677692 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [956bc3ed3cc94622db07a7722f51f57af133c564694a55227687053239cc2cce] <==
	I1124 13:54:20.491750       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 13:54:20.492182       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1124 13:54:20.492337       1 main.go:148] setting mtu 1500 for CNI 
	I1124 13:54:20.492350       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 13:54:20.492372       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T13:54:20Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 13:54:20.789769       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 13:54:20.789799       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 13:54:20.789812       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 13:54:20.789952       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1124 13:54:21.290222       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 13:54:21.290250       1 metrics.go:72] Registering metrics
	I1124 13:54:21.290312       1 controller.go:711] "Syncing nftables rules"
	I1124 13:54:30.789730       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1124 13:54:30.789783       1 main.go:301] handling current node
	I1124 13:54:40.796536       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1124 13:54:40.796581       1 main.go:301] handling current node
	
	
	==> kube-apiserver [d03839e7fa9eac10fedbb04845ef30ffaec45efb2be489888bf2968bcb45ac09] <==
	I1124 13:54:12.336125       1 policy_source.go:240] refreshing policies
	E1124 13:54:12.349003       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1124 13:54:12.396183       1 controller.go:667] quota admission added evaluator for: namespaces
	I1124 13:54:12.396484       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1124 13:54:12.396548       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 13:54:12.400481       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 13:54:12.400737       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1124 13:54:12.527861       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1124 13:54:13.200413       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1124 13:54:13.206465       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1124 13:54:13.206539       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1124 13:54:13.749879       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 13:54:13.792252       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 13:54:13.901957       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1124 13:54:13.908627       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I1124 13:54:13.909903       1 controller.go:667] quota admission added evaluator for: endpoints
	I1124 13:54:13.914359       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1124 13:54:14.245681       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1124 13:54:15.051846       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1124 13:54:15.061387       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1124 13:54:15.069174       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1124 13:54:19.841656       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 13:54:19.846713       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 13:54:19.888831       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1124 13:54:20.189514       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [8c549f2828eda0f02b6f9f194ff2300c57a32d274a9d673bace362d09c6a71b7] <==
	I1124 13:54:19.199291       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1124 13:54:19.204771       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1124 13:54:19.204821       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1124 13:54:19.205987       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1124 13:54:19.206053       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1124 13:54:19.207248       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1124 13:54:19.214506       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 13:54:19.236035       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1124 13:54:19.236060       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1124 13:54:19.236082       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1124 13:54:19.236409       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1124 13:54:19.237309       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1124 13:54:19.237340       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1124 13:54:19.237360       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1124 13:54:19.237392       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1124 13:54:19.237435       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1124 13:54:19.237480       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1124 13:54:19.238919       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1124 13:54:19.239702       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1124 13:54:19.240538       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 13:54:19.241621       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1124 13:54:19.241724       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1124 13:54:19.250964       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1124 13:54:19.257252       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 13:54:34.193345       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [c31417b0ddd4318cf9df33e13d996a1aa39754a7bf2a765270afade9daa1ffb6] <==
	I1124 13:54:20.326019       1 server_linux.go:53] "Using iptables proxy"
	I1124 13:54:20.387257       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1124 13:54:20.488232       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1124 13:54:20.488270       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1124 13:54:20.488355       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 13:54:20.509618       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 13:54:20.509673       1 server_linux.go:132] "Using iptables Proxier"
	I1124 13:54:20.515658       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 13:54:20.516046       1 server.go:527] "Version info" version="v1.34.1"
	I1124 13:54:20.516082       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 13:54:20.517852       1 config.go:309] "Starting node config controller"
	I1124 13:54:20.517932       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 13:54:20.518288       1 config.go:200] "Starting service config controller"
	I1124 13:54:20.518463       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 13:54:20.518291       1 config.go:106] "Starting endpoint slice config controller"
	I1124 13:54:20.518491       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 13:54:20.518316       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 13:54:20.518502       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 13:54:20.618862       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1124 13:54:20.618903       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1124 13:54:20.618933       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1124 13:54:20.618928       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [e5b8c93a0b3b6ce571718079c629004228b713c38451c35ff4b7ee0ca30c79da] <==
	I1124 13:54:12.846082       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 13:54:12.849098       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 13:54:12.849144       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 13:54:12.849268       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1124 13:54:12.849356       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1124 13:54:12.853009       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1124 13:54:12.856003       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1124 13:54:12.856133       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1124 13:54:12.856165       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1124 13:54:12.856231       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1124 13:54:12.856270       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1124 13:54:12.856325       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1124 13:54:12.856376       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1124 13:54:12.856480       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1124 13:54:12.856495       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1124 13:54:12.856544       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1124 13:54:12.856687       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1124 13:54:12.856816       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1124 13:54:12.856853       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1124 13:54:12.856974       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1124 13:54:12.857025       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1124 13:54:12.857148       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1124 13:54:12.857376       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1124 13:54:12.857423       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	I1124 13:54:14.149688       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 24 13:54:19 pause-677692 kubelet[1295]: I1124 13:54:19.918433    1295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/904cdb01-3591-46e1-92f5-b11cc884ee75-lib-modules\") pod \"kindnet-pwn88\" (UID: \"904cdb01-3591-46e1-92f5-b11cc884ee75\") " pod="kube-system/kindnet-pwn88"
	Nov 24 13:54:19 pause-677692 kubelet[1295]: I1124 13:54:19.918454    1295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvvfw\" (UniqueName: \"kubernetes.io/projected/904cdb01-3591-46e1-92f5-b11cc884ee75-kube-api-access-xvvfw\") pod \"kindnet-pwn88\" (UID: \"904cdb01-3591-46e1-92f5-b11cc884ee75\") " pod="kube-system/kindnet-pwn88"
	Nov 24 13:54:19 pause-677692 kubelet[1295]: I1124 13:54:19.918489    1295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cb35eb42-6ce0-4c8f-83d8-9baa8fae13d0-lib-modules\") pod \"kube-proxy-f6gbx\" (UID: \"cb35eb42-6ce0-4c8f-83d8-9baa8fae13d0\") " pod="kube-system/kube-proxy-f6gbx"
	Nov 24 13:54:19 pause-677692 kubelet[1295]: I1124 13:54:19.918512    1295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-522fp\" (UniqueName: \"kubernetes.io/projected/cb35eb42-6ce0-4c8f-83d8-9baa8fae13d0-kube-api-access-522fp\") pod \"kube-proxy-f6gbx\" (UID: \"cb35eb42-6ce0-4c8f-83d8-9baa8fae13d0\") " pod="kube-system/kube-proxy-f6gbx"
	Nov 24 13:54:21 pause-677692 kubelet[1295]: I1124 13:54:21.062737    1295 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-pwn88" podStartSLOduration=2.062718392 podStartE2EDuration="2.062718392s" podCreationTimestamp="2025-11-24 13:54:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 13:54:21.062515249 +0000 UTC m=+6.168270601" watchObservedRunningTime="2025-11-24 13:54:21.062718392 +0000 UTC m=+6.168473744"
	Nov 24 13:54:21 pause-677692 kubelet[1295]: I1124 13:54:21.361357    1295 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-f6gbx" podStartSLOduration=2.36133694 podStartE2EDuration="2.36133694s" podCreationTimestamp="2025-11-24 13:54:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 13:54:21.072292696 +0000 UTC m=+6.178048042" watchObservedRunningTime="2025-11-24 13:54:21.36133694 +0000 UTC m=+6.467092293"
	Nov 24 13:54:31 pause-677692 kubelet[1295]: I1124 13:54:31.335923    1295 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 24 13:54:31 pause-677692 kubelet[1295]: I1124 13:54:31.396488    1295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpctz\" (UniqueName: \"kubernetes.io/projected/682c48a1-83ba-4047-a2c5-13409b3a964e-kube-api-access-bpctz\") pod \"coredns-66bc5c9577-h5vz5\" (UID: \"682c48a1-83ba-4047-a2c5-13409b3a964e\") " pod="kube-system/coredns-66bc5c9577-h5vz5"
	Nov 24 13:54:31 pause-677692 kubelet[1295]: I1124 13:54:31.396533    1295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/682c48a1-83ba-4047-a2c5-13409b3a964e-config-volume\") pod \"coredns-66bc5c9577-h5vz5\" (UID: \"682c48a1-83ba-4047-a2c5-13409b3a964e\") " pod="kube-system/coredns-66bc5c9577-h5vz5"
	Nov 24 13:54:32 pause-677692 kubelet[1295]: I1124 13:54:32.101255    1295 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-h5vz5" podStartSLOduration=12.101235716 podStartE2EDuration="12.101235716s" podCreationTimestamp="2025-11-24 13:54:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 13:54:32.089155945 +0000 UTC m=+17.194911297" watchObservedRunningTime="2025-11-24 13:54:32.101235716 +0000 UTC m=+17.206991067"
	Nov 24 13:54:36 pause-677692 kubelet[1295]: W1124 13:54:36.084257    1295 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Nov 24 13:54:36 pause-677692 kubelet[1295]: E1124 13:54:36.084338    1295 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="<nil>"
	Nov 24 13:54:36 pause-677692 kubelet[1295]: E1124 13:54:36.084382    1295 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 24 13:54:36 pause-677692 kubelet[1295]: E1124 13:54:36.084398    1295 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 24 13:54:36 pause-677692 kubelet[1295]: W1124 13:54:36.184719    1295 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Nov 24 13:54:36 pause-677692 kubelet[1295]: W1124 13:54:36.339673    1295 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Nov 24 13:54:36 pause-677692 kubelet[1295]: W1124 13:54:36.605802    1295 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Nov 24 13:54:37 pause-677692 kubelet[1295]: E1124 13:54:37.022761    1295 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="state:{}"
	Nov 24 13:54:37 pause-677692 kubelet[1295]: E1124 13:54:37.022840    1295 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 24 13:54:37 pause-677692 kubelet[1295]: E1124 13:54:37.022855    1295 kubelet_pods.go:1266] "Error listing containers" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 24 13:54:37 pause-677692 kubelet[1295]: E1124 13:54:37.022867    1295 kubelet.go:2613] "Failed cleaning pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 24 13:54:41 pause-677692 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 24 13:54:41 pause-677692 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 24 13:54:41 pause-677692 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 24 13:54:41 pause-677692 systemd[1]: kubelet.service: Consumed 1.109s CPU time.
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-677692 -n pause-677692
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-677692 -n pause-677692: exit status 2 (354.35409ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
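Illustrative note (not part of the captured run): minikube status accepts a Go template via --format, which is how the harness above prints a single status field, and a non-zero exit code from minikube status encodes component state rather than a command failure, which is why the harness logs "status error: exit status 2 (may be ok)". The two fields queried in this post-mortem could, for example, be combined into a single template:

	out/minikube-linux-amd64 status -p pause-677692 --format='host={{.Host}} apiserver={{.APIServer}}'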
helpers_test.go:269: (dbg) Run:  kubectl --context pause-677692 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-677692
helpers_test.go:243: (dbg) docker inspect pause-677692:

-- stdout --
	[
	    {
	        "Id": "339feee1898dc478d0d8a1a7724180e229dfe2f593bb67ddd2a289d15fe8e9b5",
	        "Created": "2025-11-24T13:54:00.272033103Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 522970,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T13:54:00.326136431Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/339feee1898dc478d0d8a1a7724180e229dfe2f593bb67ddd2a289d15fe8e9b5/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/339feee1898dc478d0d8a1a7724180e229dfe2f593bb67ddd2a289d15fe8e9b5/hostname",
	        "HostsPath": "/var/lib/docker/containers/339feee1898dc478d0d8a1a7724180e229dfe2f593bb67ddd2a289d15fe8e9b5/hosts",
	        "LogPath": "/var/lib/docker/containers/339feee1898dc478d0d8a1a7724180e229dfe2f593bb67ddd2a289d15fe8e9b5/339feee1898dc478d0d8a1a7724180e229dfe2f593bb67ddd2a289d15fe8e9b5-json.log",
	        "Name": "/pause-677692",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-677692:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-677692",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "339feee1898dc478d0d8a1a7724180e229dfe2f593bb67ddd2a289d15fe8e9b5",
	                "LowerDir": "/var/lib/docker/overlay2/57cc918ca35c0ab88cb2fb6d76b5fea09e47e27d2e09386feb4ce2a6d2c4ab83-init/diff:/var/lib/docker/overlay2/b17d6205cf290186b389ac7c1255d7274fea54ef27df9ff8755bddd2d25eb638/diff",
	                "MergedDir": "/var/lib/docker/overlay2/57cc918ca35c0ab88cb2fb6d76b5fea09e47e27d2e09386feb4ce2a6d2c4ab83/merged",
	                "UpperDir": "/var/lib/docker/overlay2/57cc918ca35c0ab88cb2fb6d76b5fea09e47e27d2e09386feb4ce2a6d2c4ab83/diff",
	                "WorkDir": "/var/lib/docker/overlay2/57cc918ca35c0ab88cb2fb6d76b5fea09e47e27d2e09386feb4ce2a6d2c4ab83/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-677692",
	                "Source": "/var/lib/docker/volumes/pause-677692/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-677692",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-677692",
	                "name.minikube.sigs.k8s.io": "pause-677692",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "fb8c03e9c231c5bfcebbfce7ad087a9b273ac43b35ffd9cd75f5ec41ccc22f77",
	            "SandboxKey": "/var/run/docker/netns/fb8c03e9c231",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33358"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33359"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33362"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33360"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33361"
	                    }
	                ]
	            },
	            "Networks": {
	                "pause-677692": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "cb7498ead568e19b542ea87402aa7be66c83f917e9911cce43d5db518ef94dfd",
	                    "EndpointID": "f6c23b47ecf66628788ac20c4e99e59fe0536c35c6b1e1c2de02650331aab439",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "e6:5d:2a:60:67:f6",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-677692",
	                        "339feee1898d"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
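Illustrative note (not part of the captured run): the State block above already shows Status "running" and Paused false for the node container; when only a few fields are needed, docker's Go-template formatter can pull them directly instead of dumping the whole inspect document, e.g.:

	docker inspect --format '{{.State.Status}} paused={{.State.Paused}}' pause-677692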
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-677692 -n pause-677692
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-677692 -n pause-677692: exit status 2 (371.835079ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-677692 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p pause-677692 logs -n 25: (1.286489749s)
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬─────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                         ARGS                                                          │           PROFILE           │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼─────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p scheduled-stop-866823 --schedule 5m -v=5 --alsologtostderr                                                         │ scheduled-stop-866823       │ jenkins │ v1.37.0 │ 24 Nov 25 13:52 UTC │                     │
	│ stop    │ -p scheduled-stop-866823 --schedule 5m -v=5 --alsologtostderr                                                         │ scheduled-stop-866823       │ jenkins │ v1.37.0 │ 24 Nov 25 13:52 UTC │                     │
	│ stop    │ -p scheduled-stop-866823 --schedule 5m -v=5 --alsologtostderr                                                         │ scheduled-stop-866823       │ jenkins │ v1.37.0 │ 24 Nov 25 13:52 UTC │                     │
	│ stop    │ -p scheduled-stop-866823 --schedule 15s -v=5 --alsologtostderr                                                        │ scheduled-stop-866823       │ jenkins │ v1.37.0 │ 24 Nov 25 13:52 UTC │                     │
	│ stop    │ -p scheduled-stop-866823 --schedule 15s -v=5 --alsologtostderr                                                        │ scheduled-stop-866823       │ jenkins │ v1.37.0 │ 24 Nov 25 13:52 UTC │                     │
	│ stop    │ -p scheduled-stop-866823 --schedule 15s -v=5 --alsologtostderr                                                        │ scheduled-stop-866823       │ jenkins │ v1.37.0 │ 24 Nov 25 13:52 UTC │                     │
	│ stop    │ -p scheduled-stop-866823 --cancel-scheduled                                                                           │ scheduled-stop-866823       │ jenkins │ v1.37.0 │ 24 Nov 25 13:52 UTC │ 24 Nov 25 13:52 UTC │
	│ stop    │ -p scheduled-stop-866823 --schedule 15s -v=5 --alsologtostderr                                                        │ scheduled-stop-866823       │ jenkins │ v1.37.0 │ 24 Nov 25 13:52 UTC │                     │
	│ stop    │ -p scheduled-stop-866823 --schedule 15s -v=5 --alsologtostderr                                                        │ scheduled-stop-866823       │ jenkins │ v1.37.0 │ 24 Nov 25 13:52 UTC │                     │
	│ stop    │ -p scheduled-stop-866823 --schedule 15s -v=5 --alsologtostderr                                                        │ scheduled-stop-866823       │ jenkins │ v1.37.0 │ 24 Nov 25 13:52 UTC │ 24 Nov 25 13:53 UTC │
	│ delete  │ -p scheduled-stop-866823                                                                                              │ scheduled-stop-866823       │ jenkins │ v1.37.0 │ 24 Nov 25 13:53 UTC │ 24 Nov 25 13:53 UTC │
	│ start   │ -p insufficient-storage-676419 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio      │ insufficient-storage-676419 │ jenkins │ v1.37.0 │ 24 Nov 25 13:53 UTC │                     │
	│ delete  │ -p insufficient-storage-676419                                                                                        │ insufficient-storage-676419 │ jenkins │ v1.37.0 │ 24 Nov 25 13:53 UTC │ 24 Nov 25 13:53 UTC │
	│ start   │ -p pause-677692 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio             │ pause-677692                │ jenkins │ v1.37.0 │ 24 Nov 25 13:53 UTC │ 24 Nov 25 13:54 UTC │
	│ start   │ -p force-systemd-env-699216 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio            │ force-systemd-env-699216    │ jenkins │ v1.37.0 │ 24 Nov 25 13:53 UTC │ 24 Nov 25 13:54 UTC │
	│ start   │ -p offline-crio-669749 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio     │ offline-crio-669749         │ jenkins │ v1.37.0 │ 24 Nov 25 13:53 UTC │                     │
	│ start   │ -p NoKubernetes-940104 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio         │ NoKubernetes-940104         │ jenkins │ v1.37.0 │ 24 Nov 25 13:53 UTC │                     │
	│ start   │ -p NoKubernetes-940104 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio                 │ NoKubernetes-940104         │ jenkins │ v1.37.0 │ 24 Nov 25 13:53 UTC │ 24 Nov 25 13:54 UTC │
	│ delete  │ -p force-systemd-env-699216                                                                                           │ force-systemd-env-699216    │ jenkins │ v1.37.0 │ 24 Nov 25 13:54 UTC │ 24 Nov 25 13:54 UTC │
	│ start   │ -p stopped-upgrade-040555 --memory=3072 --vm-driver=docker  --container-runtime=crio                                  │ stopped-upgrade-040555      │ jenkins │ v1.32.0 │ 24 Nov 25 13:54 UTC │                     │
	│ start   │ -p NoKubernetes-940104 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio │ NoKubernetes-940104         │ jenkins │ v1.37.0 │ 24 Nov 25 13:54 UTC │ 24 Nov 25 13:54 UTC │
	│ start   │ -p pause-677692 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                      │ pause-677692                │ jenkins │ v1.37.0 │ 24 Nov 25 13:54 UTC │ 24 Nov 25 13:54 UTC │
	│ delete  │ -p NoKubernetes-940104                                                                                                │ NoKubernetes-940104         │ jenkins │ v1.37.0 │ 24 Nov 25 13:54 UTC │ 24 Nov 25 13:54 UTC │
	│ pause   │ -p pause-677692 --alsologtostderr -v=5                                                                                │ pause-677692                │ jenkins │ v1.37.0 │ 24 Nov 25 13:54 UTC │                     │
	│ start   │ -p NoKubernetes-940104 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio │ NoKubernetes-940104         │ jenkins │ v1.37.0 │ 24 Nov 25 13:54 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴─────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 13:54:42
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 13:54:42.571377  534179 out.go:360] Setting OutFile to fd 1 ...
	I1124 13:54:42.571657  534179 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:54:42.571672  534179 out.go:374] Setting ErrFile to fd 2...
	I1124 13:54:42.571679  534179 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:54:42.571990  534179 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-348000/.minikube/bin
	I1124 13:54:42.572552  534179 out.go:368] Setting JSON to false
	I1124 13:54:42.573920  534179 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":9429,"bootTime":1763983053,"procs":271,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 13:54:42.574000  534179 start.go:143] virtualization: kvm guest
	I1124 13:54:42.575614  534179 out.go:179] * [NoKubernetes-940104] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1124 13:54:42.576992  534179 notify.go:221] Checking for updates...
	I1124 13:54:42.576997  534179 out.go:179]   - MINIKUBE_LOCATION=21932
	I1124 13:54:42.578005  534179 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 13:54:42.579418  534179 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21932-348000/kubeconfig
	I1124 13:54:42.580634  534179 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21932-348000/.minikube
	I1124 13:54:42.581767  534179 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1124 13:54:42.582844  534179 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 13:54:42.584457  534179 config.go:182] Loaded profile config "offline-crio-669749": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 13:54:42.584626  534179 config.go:182] Loaded profile config "pause-677692": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 13:54:42.584752  534179 config.go:182] Loaded profile config "stopped-upgrade-040555": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1124 13:54:42.584783  534179 start.go:1901] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I1124 13:54:42.584933  534179 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 13:54:42.610508  534179 docker.go:124] docker version: linux-29.0.3:Docker Engine - Community
	I1124 13:54:42.610618  534179 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 13:54:42.666603  534179 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:false NGoroutines:66 SystemTime:2025-11-24 13:54:42.656737784 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 13:54:42.666715  534179 docker.go:319] overlay module found
	I1124 13:54:42.668411  534179 out.go:179] * Using the docker driver based on user configuration
	I1124 13:54:42.669533  534179 start.go:309] selected driver: docker
	I1124 13:54:42.669553  534179 start.go:927] validating driver "docker" against <nil>
	I1124 13:54:42.669567  534179 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 13:54:42.670340  534179 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 13:54:42.726071  534179 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:false NGoroutines:66 SystemTime:2025-11-24 13:54:42.716522379 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 13:54:42.726174  534179 start.go:1901] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I1124 13:54:42.726246  534179 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1124 13:54:42.726473  534179 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1124 13:54:42.728015  534179 out.go:179] * Using Docker driver with root privileges
	I1124 13:54:42.729156  534179 cni.go:84] Creating CNI manager for ""
	I1124 13:54:42.729228  534179 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 13:54:42.729241  534179 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1124 13:54:42.729271  534179 start.go:1901] No Kubernetes flag is set, setting Kubernetes version to v0.0.0
	I1124 13:54:42.729327  534179 start.go:353] cluster config:
	{Name:NoKubernetes-940104 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-940104 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Conta
inerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 13:54:42.730495  534179 out.go:179] * Starting minikube without Kubernetes in cluster NoKubernetes-940104
	I1124 13:54:42.731630  534179 cache.go:134] Beginning downloading kic base image for docker with crio
	I1124 13:54:42.732681  534179 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1124 13:54:42.733681  534179 cache.go:59] Skipping Kubernetes image caching due to --no-kubernetes flag
	I1124 13:54:42.733793  534179 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1124 13:54:42.733822  534179 profile.go:143] Saving config to /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/NoKubernetes-940104/config.json ...
	I1124 13:54:42.733862  534179 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/NoKubernetes-940104/config.json: {Name:mkbcf1fee9611b12497eb768abae4aaf5f800bd1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:54:42.753951  534179 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1124 13:54:42.753972  534179 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1124 13:54:42.753987  534179 cache.go:240] Successfully downloaded all kic artifacts
	I1124 13:54:42.754030  534179 start.go:360] acquireMachinesLock for NoKubernetes-940104: {Name:mk8883e63cf26f459c4f56710c0c0c77bbc53121 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 13:54:42.754092  534179 start.go:364] duration metric: took 43.147µs to acquireMachinesLock for "NoKubernetes-940104"
	I1124 13:54:42.754111  534179 start.go:93] Provisioning new machine with config: &{Name:NoKubernetes-940104 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v0.0.0 ClusterName:NoKubernetes-940104 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: S
SHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v0.0.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 13:54:42.754186  534179 start.go:125] createHost starting for "" (driver="docker")
	W1124 13:54:38.304289  520926 node_ready.go:57] node "offline-crio-669749" has "Ready":"False" status (will retry)
	W1124 13:54:40.804475  520926 node_ready.go:57] node "offline-crio-669749" has "Ready":"False" status (will retry)
	
	
	==> CRI-O <==
	Nov 24 13:54:38 pause-677692 crio[2138]: time="2025-11-24T13:54:38.24208876Z" level=info msg="RDT not available in the host system"
	Nov 24 13:54:38 pause-677692 crio[2138]: time="2025-11-24T13:54:38.24210912Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Nov 24 13:54:38 pause-677692 crio[2138]: time="2025-11-24T13:54:38.243164885Z" level=info msg="Conmon does support the --sync option"
	Nov 24 13:54:38 pause-677692 crio[2138]: time="2025-11-24T13:54:38.243184239Z" level=info msg="Conmon does support the --log-global-size-max option"
	Nov 24 13:54:38 pause-677692 crio[2138]: time="2025-11-24T13:54:38.243196835Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Nov 24 13:54:38 pause-677692 crio[2138]: time="2025-11-24T13:54:38.24393212Z" level=info msg="Conmon does support the --sync option"
	Nov 24 13:54:38 pause-677692 crio[2138]: time="2025-11-24T13:54:38.243945592Z" level=info msg="Conmon does support the --log-global-size-max option"
	Nov 24 13:54:38 pause-677692 crio[2138]: time="2025-11-24T13:54:38.247873439Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 24 13:54:38 pause-677692 crio[2138]: time="2025-11-24T13:54:38.247911147Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 24 13:54:38 pause-677692 crio[2138]: time="2025-11-24T13:54:38.248515019Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hoo
ks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_
mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory
= \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/
cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nri]\n    enable_nri = true\n    nri_listen = \"
/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Nov 24 13:54:38 pause-677692 crio[2138]: time="2025-11-24T13:54:38.248877972Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Nov 24 13:54:38 pause-677692 crio[2138]: time="2025-11-24T13:54:38.248958734Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Nov 24 13:54:38 pause-677692 crio[2138]: time="2025-11-24T13:54:38.322242073Z" level=info msg="Got pod network &{Name:coredns-66bc5c9577-h5vz5 Namespace:kube-system ID:672b6a9370af662ffa378413e9b3f3da946fabdd7e075eae822d04a932869f78 UID:682c48a1-83ba-4047-a2c5-13409b3a964e NetNS:/var/run/netns/31033811-f430-437e-8294-904db3d25b21 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0000e0080}] Aliases:map[]}"
	Nov 24 13:54:38 pause-677692 crio[2138]: time="2025-11-24T13:54:38.322393817Z" level=info msg="Checking pod kube-system_coredns-66bc5c9577-h5vz5 for CNI network kindnet (type=ptp)"
	Nov 24 13:54:38 pause-677692 crio[2138]: time="2025-11-24T13:54:38.322800944Z" level=info msg="Registered SIGHUP reload watcher"
	Nov 24 13:54:38 pause-677692 crio[2138]: time="2025-11-24T13:54:38.32282781Z" level=info msg="Starting seccomp notifier watcher"
	Nov 24 13:54:38 pause-677692 crio[2138]: time="2025-11-24T13:54:38.322871502Z" level=info msg="Create NRI interface"
	Nov 24 13:54:38 pause-677692 crio[2138]: time="2025-11-24T13:54:38.322973632Z" level=info msg="built-in NRI default validator is disabled"
	Nov 24 13:54:38 pause-677692 crio[2138]: time="2025-11-24T13:54:38.32298171Z" level=info msg="runtime interface created"
	Nov 24 13:54:38 pause-677692 crio[2138]: time="2025-11-24T13:54:38.322991106Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Nov 24 13:54:38 pause-677692 crio[2138]: time="2025-11-24T13:54:38.322996419Z" level=info msg="runtime interface starting up..."
	Nov 24 13:54:38 pause-677692 crio[2138]: time="2025-11-24T13:54:38.323001642Z" level=info msg="starting plugins..."
	Nov 24 13:54:38 pause-677692 crio[2138]: time="2025-11-24T13:54:38.323012194Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Nov 24 13:54:38 pause-677692 crio[2138]: time="2025-11-24T13:54:38.323306876Z" level=info msg="No systemd watchdog enabled"
	Nov 24 13:54:38 pause-677692 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	5bd1064f94f90       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   15 seconds ago      Running             coredns                   0                   672b6a9370af6       coredns-66bc5c9577-h5vz5               kube-system
	956bc3ed3cc94       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   26 seconds ago      Running             kindnet-cni               0                   f5bc70bc71b5c       kindnet-pwn88                          kube-system
	c31417b0ddd43       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   26 seconds ago      Running             kube-proxy                0                   4382222256ccd       kube-proxy-f6gbx                       kube-system
	d03839e7fa9ea       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   36 seconds ago      Running             kube-apiserver            0                   725ae5c65d6f2       kube-apiserver-pause-677692            kube-system
	0d00cabc63cac       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   36 seconds ago      Running             etcd                      0                   1af4751b61bc3       etcd-pause-677692                      kube-system
	8c549f2828eda       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   36 seconds ago      Running             kube-controller-manager   0                   2100ba53b72f5       kube-controller-manager-pause-677692   kube-system
	e5b8c93a0b3b6       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   36 seconds ago      Running             kube-scheduler            0                   b2a530f55d31d       kube-scheduler-pause-677692            kube-system
	
	
	==> coredns [5bd1064f94f908ca624e098766749ad47d88e5caa1c1787e882cbbb3ee3e2eaa] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:40233 - 31718 "HINFO IN 856722982997275619.5500299172942250344. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.108881979s
	
	
	==> describe nodes <==
	Name:               pause-677692
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-677692
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b5d1c9f4e75f4e638a533695fd62619949cefcab
	                    minikube.k8s.io/name=pause-677692
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T13_54_16_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 13:54:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-677692
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 13:54:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 13:54:31 +0000   Mon, 24 Nov 2025 13:54:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 13:54:31 +0000   Mon, 24 Nov 2025 13:54:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 13:54:31 +0000   Mon, 24 Nov 2025 13:54:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 13:54:31 +0000   Mon, 24 Nov 2025 13:54:31 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    pause-677692
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                8a283b5d-e477-471a-8db6-9d719cf640e5
	  Boot ID:                    9a34d64a-eb17-4892-9c0b-855837aec864
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-h5vz5                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     27s
	  kube-system                 etcd-pause-677692                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         32s
	  kube-system                 kindnet-pwn88                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      28s
	  kube-system                 kube-apiserver-pause-677692             250m (3%)     0 (0%)      0 (0%)           0 (0%)         34s
	  kube-system                 kube-controller-manager-pause-677692    200m (2%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-proxy-f6gbx                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-scheduler-pause-677692             100m (1%)     0 (0%)      0 (0%)           0 (0%)         32s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 26s   kube-proxy       
	  Normal  Starting                 33s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  32s   kubelet          Node pause-677692 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    32s   kubelet          Node pause-677692 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     32s   kubelet          Node pause-677692 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           28s   node-controller  Node pause-677692 event: Registered Node pause-677692 in Controller
	  Normal  NodeReady                16s   kubelet          Node pause-677692 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a c8 62 0b 56 43 08 06
	[  +0.000389] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 7e 1c 4f d3 0f 6c 08 06
	[Nov24 13:16] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +1.054353] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +1.023912] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +1.023872] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +1.023897] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +1.023891] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +2.047768] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +4.031637] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +8.191144] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[ +16.382308] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[Nov24 13:17] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	
	
	==> etcd [0d00cabc63cac09f0e493592d5429cf1e5bbb136da910238c780141b94e18b0f] <==
	{"level":"warn","ts":"2025-11-24T13:54:11.409480Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56250","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:54:11.418923Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56270","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:54:11.426800Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56280","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:54:11.434434Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56298","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:54:11.443254Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56318","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:54:11.453123Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56354","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:54:11.464833Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:54:11.472546Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56376","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:54:11.485175Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56392","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:54:11.498541Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56412","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:54:11.517869Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56416","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:54:11.526641Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56446","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:54:11.538061Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56460","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:54:11.546509Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56480","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:54:11.555037Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56492","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:54:11.563635Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56494","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:54:11.572085Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56514","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:54:11.581101Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:54:11.589032Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56548","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:54:11.597297Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56560","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:54:11.604952Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56570","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:54:11.618353Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56590","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:54:11.626683Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56610","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:54:11.635301Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56630","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:54:11.694238Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56650","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 13:54:47 up  2:37,  0 user,  load average: 4.63, 2.03, 1.34
	Linux pause-677692 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [956bc3ed3cc94622db07a7722f51f57af133c564694a55227687053239cc2cce] <==
	I1124 13:54:20.491750       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 13:54:20.492182       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1124 13:54:20.492337       1 main.go:148] setting mtu 1500 for CNI 
	I1124 13:54:20.492350       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 13:54:20.492372       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T13:54:20Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 13:54:20.789769       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 13:54:20.789799       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 13:54:20.789812       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 13:54:20.789952       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1124 13:54:21.290222       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 13:54:21.290250       1 metrics.go:72] Registering metrics
	I1124 13:54:21.290312       1 controller.go:711] "Syncing nftables rules"
	I1124 13:54:30.789730       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1124 13:54:30.789783       1 main.go:301] handling current node
	I1124 13:54:40.796536       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1124 13:54:40.796581       1 main.go:301] handling current node
	
	
	==> kube-apiserver [d03839e7fa9eac10fedbb04845ef30ffaec45efb2be489888bf2968bcb45ac09] <==
	I1124 13:54:12.336125       1 policy_source.go:240] refreshing policies
	E1124 13:54:12.349003       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	I1124 13:54:12.396183       1 controller.go:667] quota admission added evaluator for: namespaces
	I1124 13:54:12.396484       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1124 13:54:12.396548       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 13:54:12.400481       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 13:54:12.400737       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1124 13:54:12.527861       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1124 13:54:13.200413       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1124 13:54:13.206465       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1124 13:54:13.206539       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1124 13:54:13.749879       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 13:54:13.792252       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 13:54:13.901957       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1124 13:54:13.908627       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I1124 13:54:13.909903       1 controller.go:667] quota admission added evaluator for: endpoints
	I1124 13:54:13.914359       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1124 13:54:14.245681       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1124 13:54:15.051846       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1124 13:54:15.061387       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1124 13:54:15.069174       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1124 13:54:19.841656       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 13:54:19.846713       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 13:54:19.888831       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1124 13:54:20.189514       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [8c549f2828eda0f02b6f9f194ff2300c57a32d274a9d673bace362d09c6a71b7] <==
	I1124 13:54:19.199291       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1124 13:54:19.204771       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1124 13:54:19.204821       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1124 13:54:19.205987       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1124 13:54:19.206053       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1124 13:54:19.207248       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1124 13:54:19.214506       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 13:54:19.236035       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1124 13:54:19.236060       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1124 13:54:19.236082       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1124 13:54:19.236409       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1124 13:54:19.237309       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1124 13:54:19.237340       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1124 13:54:19.237360       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1124 13:54:19.237392       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1124 13:54:19.237435       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1124 13:54:19.237480       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1124 13:54:19.238919       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1124 13:54:19.239702       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1124 13:54:19.240538       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 13:54:19.241621       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1124 13:54:19.241724       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1124 13:54:19.250964       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1124 13:54:19.257252       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 13:54:34.193345       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [c31417b0ddd4318cf9df33e13d996a1aa39754a7bf2a765270afade9daa1ffb6] <==
	I1124 13:54:20.326019       1 server_linux.go:53] "Using iptables proxy"
	I1124 13:54:20.387257       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1124 13:54:20.488232       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1124 13:54:20.488270       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1124 13:54:20.488355       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 13:54:20.509618       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 13:54:20.509673       1 server_linux.go:132] "Using iptables Proxier"
	I1124 13:54:20.515658       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 13:54:20.516046       1 server.go:527] "Version info" version="v1.34.1"
	I1124 13:54:20.516082       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 13:54:20.517852       1 config.go:309] "Starting node config controller"
	I1124 13:54:20.517932       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 13:54:20.518288       1 config.go:200] "Starting service config controller"
	I1124 13:54:20.518463       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 13:54:20.518291       1 config.go:106] "Starting endpoint slice config controller"
	I1124 13:54:20.518491       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 13:54:20.518316       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 13:54:20.518502       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 13:54:20.618862       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1124 13:54:20.618903       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1124 13:54:20.618933       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1124 13:54:20.618928       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [e5b8c93a0b3b6ce571718079c629004228b713c38451c35ff4b7ee0ca30c79da] <==
	I1124 13:54:12.846082       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 13:54:12.849098       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 13:54:12.849144       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 13:54:12.849268       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1124 13:54:12.849356       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1124 13:54:12.853009       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1124 13:54:12.856003       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1124 13:54:12.856133       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1124 13:54:12.856165       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1124 13:54:12.856231       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1124 13:54:12.856270       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1124 13:54:12.856325       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1124 13:54:12.856376       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1124 13:54:12.856480       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1124 13:54:12.856495       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1124 13:54:12.856544       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1124 13:54:12.856687       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1124 13:54:12.856816       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1124 13:54:12.856853       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1124 13:54:12.856974       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1124 13:54:12.857025       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1124 13:54:12.857148       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1124 13:54:12.857376       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1124 13:54:12.857423       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	I1124 13:54:14.149688       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 24 13:54:19 pause-677692 kubelet[1295]: I1124 13:54:19.918433    1295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/904cdb01-3591-46e1-92f5-b11cc884ee75-lib-modules\") pod \"kindnet-pwn88\" (UID: \"904cdb01-3591-46e1-92f5-b11cc884ee75\") " pod="kube-system/kindnet-pwn88"
	Nov 24 13:54:19 pause-677692 kubelet[1295]: I1124 13:54:19.918454    1295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvvfw\" (UniqueName: \"kubernetes.io/projected/904cdb01-3591-46e1-92f5-b11cc884ee75-kube-api-access-xvvfw\") pod \"kindnet-pwn88\" (UID: \"904cdb01-3591-46e1-92f5-b11cc884ee75\") " pod="kube-system/kindnet-pwn88"
	Nov 24 13:54:19 pause-677692 kubelet[1295]: I1124 13:54:19.918489    1295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cb35eb42-6ce0-4c8f-83d8-9baa8fae13d0-lib-modules\") pod \"kube-proxy-f6gbx\" (UID: \"cb35eb42-6ce0-4c8f-83d8-9baa8fae13d0\") " pod="kube-system/kube-proxy-f6gbx"
	Nov 24 13:54:19 pause-677692 kubelet[1295]: I1124 13:54:19.918512    1295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-522fp\" (UniqueName: \"kubernetes.io/projected/cb35eb42-6ce0-4c8f-83d8-9baa8fae13d0-kube-api-access-522fp\") pod \"kube-proxy-f6gbx\" (UID: \"cb35eb42-6ce0-4c8f-83d8-9baa8fae13d0\") " pod="kube-system/kube-proxy-f6gbx"
	Nov 24 13:54:21 pause-677692 kubelet[1295]: I1124 13:54:21.062737    1295 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-pwn88" podStartSLOduration=2.062718392 podStartE2EDuration="2.062718392s" podCreationTimestamp="2025-11-24 13:54:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 13:54:21.062515249 +0000 UTC m=+6.168270601" watchObservedRunningTime="2025-11-24 13:54:21.062718392 +0000 UTC m=+6.168473744"
	Nov 24 13:54:21 pause-677692 kubelet[1295]: I1124 13:54:21.361357    1295 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-f6gbx" podStartSLOduration=2.36133694 podStartE2EDuration="2.36133694s" podCreationTimestamp="2025-11-24 13:54:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 13:54:21.072292696 +0000 UTC m=+6.178048042" watchObservedRunningTime="2025-11-24 13:54:21.36133694 +0000 UTC m=+6.467092293"
	Nov 24 13:54:31 pause-677692 kubelet[1295]: I1124 13:54:31.335923    1295 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 24 13:54:31 pause-677692 kubelet[1295]: I1124 13:54:31.396488    1295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpctz\" (UniqueName: \"kubernetes.io/projected/682c48a1-83ba-4047-a2c5-13409b3a964e-kube-api-access-bpctz\") pod \"coredns-66bc5c9577-h5vz5\" (UID: \"682c48a1-83ba-4047-a2c5-13409b3a964e\") " pod="kube-system/coredns-66bc5c9577-h5vz5"
	Nov 24 13:54:31 pause-677692 kubelet[1295]: I1124 13:54:31.396533    1295 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/682c48a1-83ba-4047-a2c5-13409b3a964e-config-volume\") pod \"coredns-66bc5c9577-h5vz5\" (UID: \"682c48a1-83ba-4047-a2c5-13409b3a964e\") " pod="kube-system/coredns-66bc5c9577-h5vz5"
	Nov 24 13:54:32 pause-677692 kubelet[1295]: I1124 13:54:32.101255    1295 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-h5vz5" podStartSLOduration=12.101235716 podStartE2EDuration="12.101235716s" podCreationTimestamp="2025-11-24 13:54:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 13:54:32.089155945 +0000 UTC m=+17.194911297" watchObservedRunningTime="2025-11-24 13:54:32.101235716 +0000 UTC m=+17.206991067"
	Nov 24 13:54:36 pause-677692 kubelet[1295]: W1124 13:54:36.084257    1295 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Nov 24 13:54:36 pause-677692 kubelet[1295]: E1124 13:54:36.084338    1295 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="<nil>"
	Nov 24 13:54:36 pause-677692 kubelet[1295]: E1124 13:54:36.084382    1295 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 24 13:54:36 pause-677692 kubelet[1295]: E1124 13:54:36.084398    1295 generic.go:254] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 24 13:54:36 pause-677692 kubelet[1295]: W1124 13:54:36.184719    1295 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Nov 24 13:54:36 pause-677692 kubelet[1295]: W1124 13:54:36.339673    1295 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Nov 24 13:54:36 pause-677692 kubelet[1295]: W1124 13:54:36.605802    1295 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "/var/run/crio/crio.sock", ServerName: "localhost", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	Nov 24 13:54:37 pause-677692 kubelet[1295]: E1124 13:54:37.022761    1295 log.go:32] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\"" filter="state:{}"
	Nov 24 13:54:37 pause-677692 kubelet[1295]: E1124 13:54:37.022840    1295 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 24 13:54:37 pause-677692 kubelet[1295]: E1124 13:54:37.022855    1295 kubelet_pods.go:1266] "Error listing containers" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 24 13:54:37 pause-677692 kubelet[1295]: E1124 13:54:37.022867    1295 kubelet.go:2613] "Failed cleaning pods" err="rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory\""
	Nov 24 13:54:41 pause-677692 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 24 13:54:41 pause-677692 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 24 13:54:41 pause-677692 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 24 13:54:41 pause-677692 systemd[1]: kubelet.service: Consumed 1.109s CPU time.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-677692 -n pause-677692
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-677692 -n pause-677692: exit status 2 (338.875459ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-677692 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (6.26s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.21s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-551674 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-551674 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (259.694539ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T13:58:01Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-551674 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-551674 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-551674 describe deploy/metrics-server -n kube-system: exit status 1 (65.05618ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-551674 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-551674
helpers_test.go:243: (dbg) docker inspect old-k8s-version-551674:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "cffc3242ebb7a478e0db0542c263e257bf174d889dd1e0cc2141101376465207",
	        "Created": "2025-11-24T13:57:09.159057998Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 572827,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T13:57:09.193788903Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/cffc3242ebb7a478e0db0542c263e257bf174d889dd1e0cc2141101376465207/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/cffc3242ebb7a478e0db0542c263e257bf174d889dd1e0cc2141101376465207/hostname",
	        "HostsPath": "/var/lib/docker/containers/cffc3242ebb7a478e0db0542c263e257bf174d889dd1e0cc2141101376465207/hosts",
	        "LogPath": "/var/lib/docker/containers/cffc3242ebb7a478e0db0542c263e257bf174d889dd1e0cc2141101376465207/cffc3242ebb7a478e0db0542c263e257bf174d889dd1e0cc2141101376465207-json.log",
	        "Name": "/old-k8s-version-551674",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-551674:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-551674",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "cffc3242ebb7a478e0db0542c263e257bf174d889dd1e0cc2141101376465207",
	                "LowerDir": "/var/lib/docker/overlay2/b94f56af59637dbd0200d859a82956dca17af029a5f0461e9cc730804b642613-init/diff:/var/lib/docker/overlay2/b17d6205cf290186b389ac7c1255d7274fea54ef27df9ff8755bddd2d25eb638/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b94f56af59637dbd0200d859a82956dca17af029a5f0461e9cc730804b642613/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b94f56af59637dbd0200d859a82956dca17af029a5f0461e9cc730804b642613/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b94f56af59637dbd0200d859a82956dca17af029a5f0461e9cc730804b642613/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-551674",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-551674/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-551674",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-551674",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-551674",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "b3d24109da13fc9102a7c55b8117e03f6d857f9336670e1afe194dc98c0420a7",
	            "SandboxKey": "/var/run/docker/netns/b3d24109da13",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33428"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33429"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33432"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33430"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33431"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-551674": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "584350b1ae0057925436f12a069654d8b9b77ec40acdead63d77442ee50e6e01",
	                    "EndpointID": "311b517cdbba56913e5c6cef41cdd58447fe449e8760e96891ff658e82971856",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "96:28:11:ef:0a:71",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-551674",
	                        "cffc3242ebb7"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
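The post-mortem dumps the whole docker inspect JSON above; individual fields can be pulled with Go templates instead. A sketch, reusing the same "22/tcp" template that appears in the minikube log further down, plus a standard template for the container IP:

	# Forwarded SSH port (matches "HostPort": "33428" in the JSON above)
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' old-k8s-version-551674
	# Container IP on the old-k8s-version-551674 network (192.168.94.2 above)
	docker container inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' old-k8s-version-551674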
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-551674 -n old-k8s-version-551674
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-551674 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-551674 logs -n 25: (1.005986071s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                          ARGS                                                                           │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-165759 sudo systemctl cat kubelet --no-pager                                                                                                  │ cilium-165759          │ jenkins │ v1.37.0 │ 24 Nov 25 13:57 UTC │                     │
	│ ssh     │ -p cilium-165759 sudo journalctl -xeu kubelet --all --full --no-pager                                                                                   │ cilium-165759          │ jenkins │ v1.37.0 │ 24 Nov 25 13:57 UTC │                     │
	│ ssh     │ -p cilium-165759 sudo cat /etc/kubernetes/kubelet.conf                                                                                                  │ cilium-165759          │ jenkins │ v1.37.0 │ 24 Nov 25 13:57 UTC │                     │
	│ ssh     │ -p cilium-165759 sudo cat /var/lib/kubelet/config.yaml                                                                                                  │ cilium-165759          │ jenkins │ v1.37.0 │ 24 Nov 25 13:57 UTC │                     │
	│ ssh     │ -p cilium-165759 sudo systemctl status docker --all --full --no-pager                                                                                   │ cilium-165759          │ jenkins │ v1.37.0 │ 24 Nov 25 13:57 UTC │                     │
	│ ssh     │ -p cilium-165759 sudo systemctl cat docker --no-pager                                                                                                   │ cilium-165759          │ jenkins │ v1.37.0 │ 24 Nov 25 13:57 UTC │                     │
	│ ssh     │ -p cilium-165759 sudo cat /etc/docker/daemon.json                                                                                                       │ cilium-165759          │ jenkins │ v1.37.0 │ 24 Nov 25 13:57 UTC │                     │
	│ ssh     │ -p cilium-165759 sudo docker system info                                                                                                                │ cilium-165759          │ jenkins │ v1.37.0 │ 24 Nov 25 13:57 UTC │                     │
	│ ssh     │ -p cilium-165759 sudo systemctl status cri-docker --all --full --no-pager                                                                               │ cilium-165759          │ jenkins │ v1.37.0 │ 24 Nov 25 13:57 UTC │                     │
	│ ssh     │ -p cilium-165759 sudo systemctl cat cri-docker --no-pager                                                                                               │ cilium-165759          │ jenkins │ v1.37.0 │ 24 Nov 25 13:57 UTC │                     │
	│ ssh     │ -p cilium-165759 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                          │ cilium-165759          │ jenkins │ v1.37.0 │ 24 Nov 25 13:57 UTC │                     │
	│ ssh     │ -p cilium-165759 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                    │ cilium-165759          │ jenkins │ v1.37.0 │ 24 Nov 25 13:57 UTC │                     │
	│ ssh     │ -p cilium-165759 sudo cri-dockerd --version                                                                                                             │ cilium-165759          │ jenkins │ v1.37.0 │ 24 Nov 25 13:57 UTC │                     │
	│ ssh     │ -p cilium-165759 sudo systemctl status containerd --all --full --no-pager                                                                               │ cilium-165759          │ jenkins │ v1.37.0 │ 24 Nov 25 13:57 UTC │                     │
	│ ssh     │ -p cilium-165759 sudo systemctl cat containerd --no-pager                                                                                               │ cilium-165759          │ jenkins │ v1.37.0 │ 24 Nov 25 13:57 UTC │                     │
	│ ssh     │ -p cilium-165759 sudo cat /lib/systemd/system/containerd.service                                                                                        │ cilium-165759          │ jenkins │ v1.37.0 │ 24 Nov 25 13:57 UTC │                     │
	│ ssh     │ -p cilium-165759 sudo cat /etc/containerd/config.toml                                                                                                   │ cilium-165759          │ jenkins │ v1.37.0 │ 24 Nov 25 13:57 UTC │                     │
	│ ssh     │ -p cilium-165759 sudo containerd config dump                                                                                                            │ cilium-165759          │ jenkins │ v1.37.0 │ 24 Nov 25 13:57 UTC │                     │
	│ ssh     │ -p cilium-165759 sudo systemctl status crio --all --full --no-pager                                                                                     │ cilium-165759          │ jenkins │ v1.37.0 │ 24 Nov 25 13:57 UTC │                     │
	│ ssh     │ -p cilium-165759 sudo systemctl cat crio --no-pager                                                                                                     │ cilium-165759          │ jenkins │ v1.37.0 │ 24 Nov 25 13:57 UTC │                     │
	│ ssh     │ -p cilium-165759 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                           │ cilium-165759          │ jenkins │ v1.37.0 │ 24 Nov 25 13:57 UTC │                     │
	│ ssh     │ -p cilium-165759 sudo crio config                                                                                                                       │ cilium-165759          │ jenkins │ v1.37.0 │ 24 Nov 25 13:57 UTC │                     │
	│ delete  │ -p cilium-165759                                                                                                                                        │ cilium-165759          │ jenkins │ v1.37.0 │ 24 Nov 25 13:57 UTC │ 24 Nov 25 13:57 UTC │
	│ start   │ -p no-preload-495729 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ no-preload-495729      │ jenkins │ v1.37.0 │ 24 Nov 25 13:57 UTC │ 24 Nov 25 13:58 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-551674 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain            │ old-k8s-version-551674 │ jenkins │ v1.37.0 │ 24 Nov 25 13:58 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 13:57:10
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 13:57:10.218542  573633 out.go:360] Setting OutFile to fd 1 ...
	I1124 13:57:10.218815  573633 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:57:10.218825  573633 out.go:374] Setting ErrFile to fd 2...
	I1124 13:57:10.218830  573633 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:57:10.219076  573633 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-348000/.minikube/bin
	I1124 13:57:10.219557  573633 out.go:368] Setting JSON to false
	I1124 13:57:10.220662  573633 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":9577,"bootTime":1763983053,"procs":317,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 13:57:10.220729  573633 start.go:143] virtualization: kvm guest
	I1124 13:57:10.222947  573633 out.go:179] * [no-preload-495729] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1124 13:57:10.224082  573633 notify.go:221] Checking for updates...
	I1124 13:57:10.224113  573633 out.go:179]   - MINIKUBE_LOCATION=21932
	I1124 13:57:10.225317  573633 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 13:57:10.226357  573633 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21932-348000/kubeconfig
	I1124 13:57:10.227615  573633 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21932-348000/.minikube
	I1124 13:57:10.228566  573633 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1124 13:57:10.229524  573633 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 13:57:10.230910  573633 config.go:182] Loaded profile config "cert-expiration-107341": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 13:57:10.231019  573633 config.go:182] Loaded profile config "kubernetes-upgrade-061040": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 13:57:10.231099  573633 config.go:182] Loaded profile config "old-k8s-version-551674": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1124 13:57:10.231200  573633 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 13:57:10.253437  573633 docker.go:124] docker version: linux-29.0.3:Docker Engine - Community
	I1124 13:57:10.253503  573633 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 13:57:10.307469  573633 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-24 13:57:10.298427156 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 13:57:10.307577  573633 docker.go:319] overlay module found
	I1124 13:57:10.309035  573633 out.go:179] * Using the docker driver based on user configuration
	I1124 13:57:10.310000  573633 start.go:309] selected driver: docker
	I1124 13:57:10.310014  573633 start.go:927] validating driver "docker" against <nil>
	I1124 13:57:10.310024  573633 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 13:57:10.310561  573633 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 13:57:10.368837  573633 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-24 13:57:10.359083058 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 13:57:10.369009  573633 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1124 13:57:10.369221  573633 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 13:57:10.370583  573633 out.go:179] * Using Docker driver with root privileges
	I1124 13:57:10.371577  573633 cni.go:84] Creating CNI manager for ""
	I1124 13:57:10.371643  573633 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 13:57:10.371653  573633 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1124 13:57:10.371722  573633 start.go:353] cluster config:
	{Name:no-preload-495729 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-495729 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID
:0 GPUs: AutoPauseInterval:1m0s}
	I1124 13:57:10.372825  573633 out.go:179] * Starting "no-preload-495729" primary control-plane node in "no-preload-495729" cluster
	I1124 13:57:10.373834  573633 cache.go:134] Beginning downloading kic base image for docker with crio
	I1124 13:57:10.374930  573633 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1124 13:57:10.375871  573633 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 13:57:10.375926  573633 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1124 13:57:10.375971  573633 profile.go:143] Saving config to /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/no-preload-495729/config.json ...
	I1124 13:57:10.376012  573633 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/no-preload-495729/config.json: {Name:mke1a0c7d43d3d88b3c393226f430e80d17dba2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:57:10.376198  573633 cache.go:107] acquiring lock: {Name:mk764472169a1e016ae63c0caff778e680c6cc24 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 13:57:10.376234  573633 cache.go:107] acquiring lock: {Name:mk669cb175129cf687c7e25066832b47953691e3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 13:57:10.376240  573633 cache.go:107] acquiring lock: {Name:mka0650b538fb4091b2e54c68f59570306a77fce Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 13:57:10.376212  573633 cache.go:107] acquiring lock: {Name:mk5f01751f9e61bc354dc5d1166bb5f82b537ba6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 13:57:10.376351  573633 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1124 13:57:10.376347  573633 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1124 13:57:10.376301  573633 cache.go:107] acquiring lock: {Name:mkcfb1dbf2a96e162ab77a7a3e525cb4ab2b83eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 13:57:10.376292  573633 cache.go:107] acquiring lock: {Name:mk0942bbb6bc7b396b0ef16d0367e14ae5995fec Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 13:57:10.376213  573633 cache.go:107] acquiring lock: {Name:mk758ac789f0a6c975e003d2ce1360b045d19bd9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 13:57:10.376418  573633 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1124 13:57:10.376194  573633 cache.go:107] acquiring lock: {Name:mka7c11330b71ddccabe0a28536b2929e10c275d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 13:57:10.376577  573633 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1124 13:57:10.376589  573633 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1124 13:57:10.376631  573633 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1124 13:57:10.376659  573633 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1124 13:57:10.376668  573633 cache.go:115] /home/jenkins/minikube-integration/21932-348000/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1124 13:57:10.376680  573633 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21932-348000/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 510.198µs
	I1124 13:57:10.376697  573633 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21932-348000/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1124 13:57:10.377470  573633 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1124 13:57:10.377537  573633 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1124 13:57:10.377547  573633 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1124 13:57:10.377685  573633 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1124 13:57:10.377723  573633 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1124 13:57:10.377732  573633 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1124 13:57:10.377704  573633 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1124 13:57:10.396669  573633 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1124 13:57:10.396687  573633 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1124 13:57:10.396707  573633 cache.go:240] Successfully downloaded all kic artifacts
	I1124 13:57:10.396756  573633 start.go:360] acquireMachinesLock for no-preload-495729: {Name:mk2b7a8448b6c656ea268c32a99c11369d347825 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 13:57:10.396846  573633 start.go:364] duration metric: took 70.67µs to acquireMachinesLock for "no-preload-495729"
	I1124 13:57:10.396873  573633 start.go:93] Provisioning new machine with config: &{Name:no-preload-495729 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-495729 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwa
rePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 13:57:10.396969  573633 start.go:125] createHost starting for "" (driver="docker")
	I1124 13:57:09.081017  571407 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21932-348000/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-551674:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (5.112727594s)
	I1124 13:57:09.081055  571407 kic.go:203] duration metric: took 5.112892012s to extract preloaded images to volume ...
	W1124 13:57:09.081163  571407 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1124 13:57:09.081208  571407 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1124 13:57:09.081265  571407 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1124 13:57:09.142992  571407 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-551674 --name old-k8s-version-551674 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-551674 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-551674 --network old-k8s-version-551674 --ip 192.168.94.2 --volume old-k8s-version-551674:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1124 13:57:09.454604  571407 cli_runner.go:164] Run: docker container inspect old-k8s-version-551674 --format={{.State.Running}}
	I1124 13:57:09.472872  571407 cli_runner.go:164] Run: docker container inspect old-k8s-version-551674 --format={{.State.Status}}
	I1124 13:57:09.491970  571407 cli_runner.go:164] Run: docker exec old-k8s-version-551674 stat /var/lib/dpkg/alternatives/iptables
	I1124 13:57:09.541167  571407 oci.go:144] the created container "old-k8s-version-551674" has a running status.
	I1124 13:57:09.541193  571407 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21932-348000/.minikube/machines/old-k8s-version-551674/id_rsa...
	I1124 13:57:09.590008  571407 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21932-348000/.minikube/machines/old-k8s-version-551674/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1124 13:57:09.619369  571407 cli_runner.go:164] Run: docker container inspect old-k8s-version-551674 --format={{.State.Status}}
	I1124 13:57:09.638216  571407 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1124 13:57:09.638235  571407 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-551674 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1124 13:57:09.680942  571407 cli_runner.go:164] Run: docker container inspect old-k8s-version-551674 --format={{.State.Status}}
	I1124 13:57:09.702755  571407 machine.go:94] provisionDockerMachine start ...
	I1124 13:57:09.702846  571407 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-551674
	I1124 13:57:09.725575  571407 main.go:143] libmachine: Using SSH client type: native
	I1124 13:57:09.726004  571407 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33428 <nil> <nil>}
	I1124 13:57:09.726027  571407 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 13:57:09.726816  571407 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:49568->127.0.0.1:33428: read: connection reset by peer
	I1124 13:57:12.869504  571407 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-551674
	
	I1124 13:57:12.869547  571407 ubuntu.go:182] provisioning hostname "old-k8s-version-551674"
	I1124 13:57:12.869619  571407 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-551674
	I1124 13:57:12.887539  571407 main.go:143] libmachine: Using SSH client type: native
	I1124 13:57:12.887814  571407 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33428 <nil> <nil>}
	I1124 13:57:12.887829  571407 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-551674 && echo "old-k8s-version-551674" | sudo tee /etc/hostname
	I1124 13:57:13.040135  571407 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-551674
	
	I1124 13:57:13.040227  571407 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-551674
	I1124 13:57:13.059060  571407 main.go:143] libmachine: Using SSH client type: native
	I1124 13:57:13.059344  571407 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33428 <nil> <nil>}
	I1124 13:57:13.059382  571407 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-551674' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-551674/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-551674' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 13:57:10.399195  573633 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1124 13:57:10.399402  573633 start.go:159] libmachine.API.Create for "no-preload-495729" (driver="docker")
	I1124 13:57:10.399436  573633 client.go:173] LocalClient.Create starting
	I1124 13:57:10.399495  573633 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21932-348000/.minikube/certs/ca.pem
	I1124 13:57:10.399537  573633 main.go:143] libmachine: Decoding PEM data...
	I1124 13:57:10.399566  573633 main.go:143] libmachine: Parsing certificate...
	I1124 13:57:10.399624  573633 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21932-348000/.minikube/certs/cert.pem
	I1124 13:57:10.399652  573633 main.go:143] libmachine: Decoding PEM data...
	I1124 13:57:10.399671  573633 main.go:143] libmachine: Parsing certificate...
	I1124 13:57:10.400028  573633 cli_runner.go:164] Run: docker network inspect no-preload-495729 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1124 13:57:10.416945  573633 cli_runner.go:211] docker network inspect no-preload-495729 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1124 13:57:10.417009  573633 network_create.go:284] running [docker network inspect no-preload-495729] to gather additional debugging logs...
	I1124 13:57:10.417030  573633 cli_runner.go:164] Run: docker network inspect no-preload-495729
	W1124 13:57:10.431256  573633 cli_runner.go:211] docker network inspect no-preload-495729 returned with exit code 1
	I1124 13:57:10.431278  573633 network_create.go:287] error running [docker network inspect no-preload-495729]: docker network inspect no-preload-495729: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-495729 not found
	I1124 13:57:10.431291  573633 network_create.go:289] output of [docker network inspect no-preload-495729]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-495729 not found
	
	** /stderr **
	I1124 13:57:10.431357  573633 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 13:57:10.447792  573633 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-d51e7dfe1049 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:4a:86:1b:17:16:ff} reservation:<nil>}
	I1124 13:57:10.448788  573633 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-e3a6280986d1 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:92:e6:88:24:ba:69} reservation:<nil>}
	I1124 13:57:10.449582  573633 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-e4f79d672777 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:32:e2:7c:23:0e:27} reservation:<nil>}
	I1124 13:57:10.450860  573633 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-283ea71f66a5 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:b6:70:12:a2:88:dd} reservation:<nil>}
	I1124 13:57:10.451371  573633 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-6303f2fb88a2 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:76:39:35:0d:14:96} reservation:<nil>}
	I1124 13:57:10.451870  573633 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-584350b1ae00 IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:72:e5:2a:e9:2d:0e} reservation:<nil>}
	I1124 13:57:10.452479  573633 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001c81450}
	I1124 13:57:10.452501  573633 network_create.go:124] attempt to create docker network no-preload-495729 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1124 13:57:10.452539  573633 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-495729 no-preload-495729
	I1124 13:57:10.500646  573633 network_create.go:108] docker network no-preload-495729 192.168.103.0/24 created
	I1124 13:57:10.500671  573633 kic.go:121] calculated static IP "192.168.103.2" for the "no-preload-495729" container
	I1124 13:57:10.500737  573633 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1124 13:57:10.517258  573633 cli_runner.go:164] Run: docker volume create no-preload-495729 --label name.minikube.sigs.k8s.io=no-preload-495729 --label created_by.minikube.sigs.k8s.io=true
	I1124 13:57:10.533225  573633 oci.go:103] Successfully created a docker volume no-preload-495729
	I1124 13:57:10.533293  573633 cli_runner.go:164] Run: docker run --rm --name no-preload-495729-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-495729 --entrypoint /usr/bin/test -v no-preload-495729:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1124 13:57:10.538879  573633 cache.go:162] opening:  /home/jenkins/minikube-integration/21932-348000/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0
	I1124 13:57:10.543489  573633 cache.go:162] opening:  /home/jenkins/minikube-integration/21932-348000/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1
	I1124 13:57:10.551718  573633 cache.go:162] opening:  /home/jenkins/minikube-integration/21932-348000/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1124 13:57:10.557314  573633 cache.go:162] opening:  /home/jenkins/minikube-integration/21932-348000/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1
	I1124 13:57:10.569241  573633 cache.go:162] opening:  /home/jenkins/minikube-integration/21932-348000/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1
	I1124 13:57:10.581346  573633 cache.go:162] opening:  /home/jenkins/minikube-integration/21932-348000/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1
	I1124 13:57:10.586504  573633 cache.go:162] opening:  /home/jenkins/minikube-integration/21932-348000/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I1124 13:57:10.672583  573633 cache.go:157] /home/jenkins/minikube-integration/21932-348000/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1124 13:57:10.672619  573633 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21932-348000/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 296.383762ms
	I1124 13:57:10.672637  573633 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21932-348000/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1124 13:57:10.949368  573633 cache.go:157] /home/jenkins/minikube-integration/21932-348000/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1124 13:57:10.949394  573633 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21932-348000/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1" took 573.217784ms
	I1124 13:57:10.949405  573633 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21932-348000/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1124 13:57:10.964675  573633 oci.go:107] Successfully prepared a docker volume no-preload-495729
	I1124 13:57:10.964718  573633 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	W1124 13:57:10.964794  573633 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1124 13:57:10.964821  573633 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1124 13:57:10.964859  573633 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1124 13:57:11.019986  573633 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-495729 --name no-preload-495729 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-495729 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-495729 --network no-preload-495729 --ip 192.168.103.2 --volume no-preload-495729:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1124 13:57:11.319016  573633 cli_runner.go:164] Run: docker container inspect no-preload-495729 --format={{.State.Running}}
	I1124 13:57:11.336152  573633 cli_runner.go:164] Run: docker container inspect no-preload-495729 --format={{.State.Status}}
	I1124 13:57:11.352975  573633 cli_runner.go:164] Run: docker exec no-preload-495729 stat /var/lib/dpkg/alternatives/iptables
	I1124 13:57:11.396927  573633 oci.go:144] the created container "no-preload-495729" has a running status.
	I1124 13:57:11.396962  573633 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21932-348000/.minikube/machines/no-preload-495729/id_rsa...
	I1124 13:57:11.732240  573633 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21932-348000/.minikube/machines/no-preload-495729/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1124 13:57:11.745457  573633 cache.go:157] /home/jenkins/minikube-integration/21932-348000/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1124 13:57:11.745486  573633 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21932-348000/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1" took 1.369244968s
	I1124 13:57:11.745503  573633 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21932-348000/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1124 13:57:11.760356  573633 cli_runner.go:164] Run: docker container inspect no-preload-495729 --format={{.State.Status}}
	I1124 13:57:11.782516  573633 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1124 13:57:11.782539  573633 kic_runner.go:114] Args: [docker exec --privileged no-preload-495729 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1124 13:57:11.834128  573633 cli_runner.go:164] Run: docker container inspect no-preload-495729 --format={{.State.Status}}
	I1124 13:57:11.857036  573633 machine.go:94] provisionDockerMachine start ...
	I1124 13:57:11.857148  573633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-495729
	I1124 13:57:11.878158  573633 main.go:143] libmachine: Using SSH client type: native
	I1124 13:57:11.878519  573633 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I1124 13:57:11.878554  573633 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 13:57:11.916024  573633 cache.go:157] /home/jenkins/minikube-integration/21932-348000/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1124 13:57:11.916057  573633 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21932-348000/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1" took 1.539865062s
	I1124 13:57:11.916074  573633 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21932-348000/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1124 13:57:11.965704  573633 cache.go:157] /home/jenkins/minikube-integration/21932-348000/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1124 13:57:11.965740  573633 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21932-348000/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1" took 1.589506659s
	I1124 13:57:11.965756  573633 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21932-348000/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1124 13:57:12.041467  573633 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-495729
	
	I1124 13:57:12.041497  573633 ubuntu.go:182] provisioning hostname "no-preload-495729"
	I1124 13:57:12.041569  573633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-495729
	I1124 13:57:12.062646  573633 main.go:143] libmachine: Using SSH client type: native
	I1124 13:57:12.062900  573633 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I1124 13:57:12.062921  573633 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-495729 && echo "no-preload-495729" | sudo tee /etc/hostname
	I1124 13:57:12.105540  573633 cache.go:157] /home/jenkins/minikube-integration/21932-348000/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1124 13:57:12.105568  573633 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21932-348000/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 1.729334606s
	I1124 13:57:12.105583  573633 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21932-348000/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1124 13:57:12.215707  573633 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-495729
	
	I1124 13:57:12.215782  573633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-495729
	I1124 13:57:12.232680  573633 main.go:143] libmachine: Using SSH client type: native
	I1124 13:57:12.232988  573633 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I1124 13:57:12.233021  573633 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-495729' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-495729/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-495729' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 13:57:12.275167  573633 cache.go:157] /home/jenkins/minikube-integration/21932-348000/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 exists
	I1124 13:57:12.275194  573633 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21932-348000/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0" took 1.898984854s
	I1124 13:57:12.275209  573633 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21932-348000/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1124 13:57:12.275229  573633 cache.go:87] Successfully saved all images to host disk.
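The cache hits above refer to per-image tarballs on the host; a minimal sketch for inspecting that cache (paths taken from the log, exact contents vary by run):

	# no preload tarball exists for crio, so each image is cached individually
	ls -lh /home/jenkins/minikube-integration/21932-348000/.minikube/cache/images/amd64/registry.k8s.io/
	# expected entries include etcd_3.6.4-0, kube-apiserver_v1.34.1, kube-controller-manager_v1.34.1,
	# kube-scheduler_v1.34.1 and coredns/coredns_v1.12.1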
	I1124 13:57:12.375110  573633 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 13:57:12.375136  573633 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21932-348000/.minikube CaCertPath:/home/jenkins/minikube-integration/21932-348000/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21932-348000/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21932-348000/.minikube}
	I1124 13:57:12.375161  573633 ubuntu.go:190] setting up certificates
	I1124 13:57:12.375184  573633 provision.go:84] configureAuth start
	I1124 13:57:12.375247  573633 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-495729
	I1124 13:57:12.391664  573633 provision.go:143] copyHostCerts
	I1124 13:57:12.391728  573633 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-348000/.minikube/ca.pem, removing ...
	I1124 13:57:12.391743  573633 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-348000/.minikube/ca.pem
	I1124 13:57:12.391811  573633 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-348000/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21932-348000/.minikube/ca.pem (1078 bytes)
	I1124 13:57:12.391940  573633 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-348000/.minikube/cert.pem, removing ...
	I1124 13:57:12.391953  573633 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-348000/.minikube/cert.pem
	I1124 13:57:12.391995  573633 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-348000/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21932-348000/.minikube/cert.pem (1123 bytes)
	I1124 13:57:12.392079  573633 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-348000/.minikube/key.pem, removing ...
	I1124 13:57:12.392089  573633 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-348000/.minikube/key.pem
	I1124 13:57:12.392126  573633 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-348000/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21932-348000/.minikube/key.pem (1675 bytes)
	I1124 13:57:12.392197  573633 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21932-348000/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21932-348000/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21932-348000/.minikube/certs/ca-key.pem org=jenkins.no-preload-495729 san=[127.0.0.1 192.168.103.2 localhost minikube no-preload-495729]
	I1124 13:57:12.455602  573633 provision.go:177] copyRemoteCerts
	I1124 13:57:12.455660  573633 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 13:57:12.455713  573633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-495729
	I1124 13:57:12.472068  573633 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/no-preload-495729/id_rsa Username:docker}
	I1124 13:57:12.572699  573633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1124 13:57:12.591571  573633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1124 13:57:12.609715  573633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1124 13:57:12.626247  573633 provision.go:87] duration metric: took 251.047769ms to configureAuth
	I1124 13:57:12.626269  573633 ubuntu.go:206] setting minikube options for container-runtime
	I1124 13:57:12.626406  573633 config.go:182] Loaded profile config "no-preload-495729": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 13:57:12.626497  573633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-495729
	I1124 13:57:12.643097  573633 main.go:143] libmachine: Using SSH client type: native
	I1124 13:57:12.643297  573633 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I1124 13:57:12.643311  573633 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1124 13:57:12.926488  573633 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1124 13:57:12.926513  573633 machine.go:97] duration metric: took 1.069448048s to provisionDockerMachine
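The insecure-registry option written over SSH above can be verified directly against the node; a minimal check on the docker driver (container name from the log):

	docker exec no-preload-495729 cat /etc/sysconfig/crio.minikube
	# expected: CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '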
	I1124 13:57:12.926526  573633 client.go:176] duration metric: took 2.527082252s to LocalClient.Create
	I1124 13:57:12.926542  573633 start.go:167] duration metric: took 2.527140782s to libmachine.API.Create "no-preload-495729"
	I1124 13:57:12.926551  573633 start.go:293] postStartSetup for "no-preload-495729" (driver="docker")
	I1124 13:57:12.926563  573633 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 13:57:12.926625  573633 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 13:57:12.926665  573633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-495729
	I1124 13:57:12.945012  573633 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/no-preload-495729/id_rsa Username:docker}
	I1124 13:57:13.045606  573633 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 13:57:13.049018  573633 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 13:57:13.049043  573633 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 13:57:13.049054  573633 filesync.go:126] Scanning /home/jenkins/minikube-integration/21932-348000/.minikube/addons for local assets ...
	I1124 13:57:13.049104  573633 filesync.go:126] Scanning /home/jenkins/minikube-integration/21932-348000/.minikube/files for local assets ...
	I1124 13:57:13.049186  573633 filesync.go:149] local asset: /home/jenkins/minikube-integration/21932-348000/.minikube/files/etc/ssl/certs/3515932.pem -> 3515932.pem in /etc/ssl/certs
	I1124 13:57:13.049301  573633 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 13:57:13.056718  573633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/files/etc/ssl/certs/3515932.pem --> /etc/ssl/certs/3515932.pem (1708 bytes)
	I1124 13:57:13.075452  573633 start.go:296] duration metric: took 148.886485ms for postStartSetup
	I1124 13:57:13.075793  573633 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-495729
	I1124 13:57:13.092598  573633 profile.go:143] Saving config to /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/no-preload-495729/config.json ...
	I1124 13:57:13.092813  573633 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 13:57:13.092858  573633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-495729
	I1124 13:57:13.109348  573633 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/no-preload-495729/id_rsa Username:docker}
	I1124 13:57:13.205859  573633 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 13:57:13.210582  573633 start.go:128] duration metric: took 2.813599257s to createHost
	I1124 13:57:13.210607  573633 start.go:83] releasing machines lock for "no-preload-495729", held for 2.813746179s
	I1124 13:57:13.210676  573633 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-495729
	I1124 13:57:13.228698  573633 ssh_runner.go:195] Run: cat /version.json
	I1124 13:57:13.228742  573633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-495729
	I1124 13:57:13.228818  573633 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 13:57:13.228905  573633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-495729
	I1124 13:57:13.247411  573633 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/no-preload-495729/id_rsa Username:docker}
	I1124 13:57:13.247812  573633 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/no-preload-495729/id_rsa Username:docker}
	I1124 13:57:13.343531  573633 ssh_runner.go:195] Run: systemctl --version
	I1124 13:57:13.397825  573633 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1124 13:57:13.433057  573633 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 13:57:13.437611  573633 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 13:57:13.437678  573633 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 13:57:13.461709  573633 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
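The find/mv step above sidelines any pre-existing bridge or podman CNI configs so that only the CNI minikube sets up later is active; a minimal check (file names as reported in the log, suffixed with .mk_disabled after the move):

	docker exec no-preload-495729 ls /etc/cni/net.d/
	# e.g. 10-crio-bridge.conflist.disabled.mk_disabled  87-podman-bridge.conflist.mk_disabled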
	I1124 13:57:13.461730  573633 start.go:496] detecting cgroup driver to use...
	I1124 13:57:13.461764  573633 detect.go:190] detected "systemd" cgroup driver on host os
	I1124 13:57:13.461815  573633 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1124 13:57:13.477030  573633 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1124 13:57:13.488372  573633 docker.go:218] disabling cri-docker service (if available) ...
	I1124 13:57:13.488423  573633 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 13:57:13.504261  573633 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 13:57:13.522485  573633 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 13:57:13.616540  573633 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 13:57:13.708748  573633 docker.go:234] disabling docker service ...
	I1124 13:57:13.708810  573633 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 13:57:13.727572  573633 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 13:57:13.740700  573633 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 13:57:13.832100  573633 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 13:57:13.927324  573633 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 13:57:13.939714  573633 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 13:57:13.953474  573633 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1124 13:57:13.953536  573633 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 13:57:13.962954  573633 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1124 13:57:13.963008  573633 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 13:57:13.971194  573633 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 13:57:13.979518  573633 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 13:57:13.987857  573633 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 13:57:13.995322  573633 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 13:57:14.003141  573633 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 13:57:14.015247  573633 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 13:57:14.023120  573633 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 13:57:14.029999  573633 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 13:57:14.037215  573633 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 13:57:14.120958  573633 ssh_runner.go:195] Run: sudo systemctl restart crio
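Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf with settings along these lines before crio is restarted (a reconstruction from the commands, not the full file; section names follow the standard CRI-O TOML layout):

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"

	[crio.runtime]
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]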
	I1124 13:57:14.583995  573633 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1124 13:57:14.584071  573633 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1124 13:57:14.587936  573633 start.go:564] Will wait 60s for crictl version
	I1124 13:57:14.588004  573633 ssh_runner.go:195] Run: which crictl
	I1124 13:57:14.591330  573633 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 13:57:14.614989  573633 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1124 13:57:14.615057  573633 ssh_runner.go:195] Run: crio --version
	I1124 13:57:14.644346  573633 ssh_runner.go:195] Run: crio --version
	I1124 13:57:14.680867  573633 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1124 13:57:14.682195  573633 cli_runner.go:164] Run: docker network inspect no-preload-495729 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 13:57:14.698662  573633 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1124 13:57:14.702751  573633 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
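The /etc/hosts rewrite above adds the host.minikube.internal alias for the network gateway; a minimal check (addresses and container name from the log):

	docker exec no-preload-495729 grep host.minikube.internal /etc/hosts
	# expected: 192.168.103.1	host.minikube.internal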
	I1124 13:57:14.712750  573633 kubeadm.go:884] updating cluster {Name:no-preload-495729 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-495729 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 13:57:14.712900  573633 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 13:57:14.712952  573633 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 13:57:14.738848  573633 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1124 13:57:14.738868  573633 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.34.1 registry.k8s.io/kube-controller-manager:v1.34.1 registry.k8s.io/kube-scheduler:v1.34.1 registry.k8s.io/kube-proxy:v1.34.1 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.4-0 registry.k8s.io/coredns/coredns:v1.12.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1124 13:57:14.738947  573633 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1124 13:57:14.738965  573633 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 13:57:14.738977  573633 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1124 13:57:14.738981  573633 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1124 13:57:14.738953  573633 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1124 13:57:14.738979  573633 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1124 13:57:14.739006  573633 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1124 13:57:14.738991  573633 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1124 13:57:14.740266  573633 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1124 13:57:14.740283  573633 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1124 13:57:14.740283  573633 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1124 13:57:14.740266  573633 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1124 13:57:14.740266  573633 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1124 13:57:14.740323  573633 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1124 13:57:14.740323  573633 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 13:57:14.740355  573633 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1124 13:57:14.869183  573633 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.4-0
	I1124 13:57:14.872615  573633 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.34.1
	I1124 13:57:14.876185  573633 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.34.1
	I1124 13:57:14.887969  573633 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.12.1
	I1124 13:57:14.898500  573633 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.34.1
	I1124 13:57:14.910588  573633 cache_images.go:118] "registry.k8s.io/etcd:3.6.4-0" needs transfer: "registry.k8s.io/etcd:3.6.4-0" does not exist at hash "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115" in container runtime
	I1124 13:57:14.910638  573633 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.4-0
	I1124 13:57:14.910685  573633 ssh_runner.go:195] Run: which crictl
	I1124 13:57:14.913079  573633 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.34.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.34.1" does not exist at hash "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f" in container runtime
	I1124 13:57:14.913122  573633 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1124 13:57:14.913171  573633 ssh_runner.go:195] Run: which crictl
	I1124 13:57:14.915095  573633 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.34.1
	I1124 13:57:14.917783  573633 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.34.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.34.1" does not exist at hash "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97" in container runtime
	I1124 13:57:14.917821  573633 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.34.1
	I1124 13:57:14.917858  573633 ssh_runner.go:195] Run: which crictl
	I1124 13:57:14.930620  573633 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.12.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.12.1" does not exist at hash "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969" in container runtime
	I1124 13:57:14.930685  573633 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.12.1
	I1124 13:57:14.930728  573633 ssh_runner.go:195] Run: which crictl
	I1124 13:57:14.940096  573633 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.34.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.34.1" does not exist at hash "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813" in container runtime
	I1124 13:57:14.940111  573633 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1124 13:57:14.940133  573633 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.34.1
	I1124 13:57:14.940164  573633 ssh_runner.go:195] Run: which crictl
	I1124 13:57:14.940183  573633 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1124 13:57:14.953091  573633 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.34.1" needs transfer: "registry.k8s.io/kube-proxy:v1.34.1" does not exist at hash "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7" in container runtime
	I1124 13:57:14.953131  573633 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.34.1
	I1124 13:57:14.953162  573633 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1124 13:57:14.953185  573633 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1124 13:57:14.953169  573633 ssh_runner.go:195] Run: which crictl
	I1124 13:57:14.970208  573633 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1124 13:57:14.970208  573633 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1124 13:57:14.970268  573633 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1124 13:57:14.988704  573633 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1124 13:57:14.989985  573633 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1124 13:57:14.989995  573633 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1124 13:57:15.005991  573633 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1124 13:57:15.006122  573633 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1124 13:57:15.006264  573633 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1124 13:57:15.025528  573633 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1124 13:57:15.025615  573633 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1124 13:57:15.027800  573633 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1124 13:57:15.042874  573633 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21932-348000/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0
	I1124 13:57:15.042974  573633 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0
	I1124 13:57:15.043015  573633 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21932-348000/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1124 13:57:15.043070  573633 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1124 13:57:15.043087  573633 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1124 13:57:15.059703  573633 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21932-348000/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1
	I1124 13:57:15.059785  573633 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21932-348000/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1
	I1124 13:57:15.059795  573633 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1124 13:57:15.059853  573633 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.4-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.4-0': No such file or directory
	I1124 13:57:15.059877  573633 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1124 13:57:15.059877  573633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 --> /var/lib/minikube/images/etcd_3.6.4-0 (74320896 bytes)
	I1124 13:57:15.059918  573633 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1
	I1124 13:57:15.059913  573633 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.34.1': No such file or directory
	I1124 13:57:15.059937  573633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 --> /var/lib/minikube/images/kube-controller-manager_v1.34.1 (22831104 bytes)
	I1124 13:57:15.091430  573633 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.34.1': No such file or directory
	I1124 13:57:15.091460  573633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 --> /var/lib/minikube/images/kube-apiserver_v1.34.1 (27073024 bytes)
	I1124 13:57:15.092766  573633 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21932-348000/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1
	I1124 13:57:15.092861  573633 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1124 13:57:15.092912  573633 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.12.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.12.1': No such file or directory
	I1124 13:57:15.092958  573633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 --> /var/lib/minikube/images/coredns_v1.12.1 (22394368 bytes)
	I1124 13:57:15.093610  573633 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21932-348000/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1
	I1124 13:57:15.093696  573633 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1
	I1124 13:57:15.196219  573633 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 13:57:15.216101  573633 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.34.1': No such file or directory
	I1124 13:57:15.216131  573633 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.34.1': No such file or directory
	I1124 13:57:15.216143  573633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 --> /var/lib/minikube/images/kube-proxy_v1.34.1 (25966080 bytes)
	I1124 13:57:15.216159  573633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 --> /var/lib/minikube/images/kube-scheduler_v1.34.1 (17396736 bytes)
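Each scp above stages an image tarball under /var/lib/minikube/images on the node; with CRI-O, the tarballs are then loaded via podman into the shared container storage. A minimal manual equivalent for one image (a sketch only; the automated load happens later in the run):

	docker exec no-preload-495729 sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0
	docker exec no-preload-495729 sudo crictl images | grep etcd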
	I1124 13:57:13.519562  549693 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.073756563s)
	W1124 13:57:13.519597  549693 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I1124 13:57:13.519606  549693 logs.go:123] Gathering logs for kube-apiserver [281b403f5869d6fd99f64af54bb1a111f4065c8ae8df6063d003eed1dc0818d3] ...
	I1124 13:57:13.519620  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 281b403f5869d6fd99f64af54bb1a111f4065c8ae8df6063d003eed1dc0818d3"
	I1124 13:57:13.558382  549693 logs.go:123] Gathering logs for kube-controller-manager [1ccaf986d410e90f1733304d0ae319bacab43b9203872fcd4f8ebea4a60b66f9] ...
	I1124 13:57:13.558419  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1ccaf986d410e90f1733304d0ae319bacab43b9203872fcd4f8ebea4a60b66f9"
	I1124 13:57:13.589387  549693 logs.go:123] Gathering logs for CRI-O ...
	I1124 13:57:13.589418  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 13:57:13.639234  549693 logs.go:123] Gathering logs for kubelet ...
	I1124 13:57:13.639260  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 13:57:13.710654  549693 logs.go:123] Gathering logs for kube-apiserver [d5c7a42a438924178c1e3946f6437111c8fac5f01d25a6f15d1f63c567efb073] ...
	I1124 13:57:13.710684  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d5c7a42a438924178c1e3946f6437111c8fac5f01d25a6f15d1f63c567efb073"
	I1124 13:57:13.742326  549693 logs.go:123] Gathering logs for kube-scheduler [09474ec71ef80f8f9036b351250c36774a1251b7d7957550d815fc37dbb0f4ff] ...
	I1124 13:57:13.742350  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 09474ec71ef80f8f9036b351250c36774a1251b7d7957550d815fc37dbb0f4ff"
	I1124 13:57:13.793435  549693 logs.go:123] Gathering logs for kube-controller-manager [196609a37eafb369c7245cccb47becbe53c492038396d82778751fd2bc00c121] ...
	I1124 13:57:13.793464  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 196609a37eafb369c7245cccb47becbe53c492038396d82778751fd2bc00c121"
	I1124 13:57:13.822589  549693 logs.go:123] Gathering logs for container status ...
	I1124 13:57:13.822622  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 13:57:13.853301  549693 logs.go:123] Gathering logs for dmesg ...
	I1124 13:57:13.853328  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
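The log-gathering pass above reduces to a handful of commands that can also be run by hand inside the node while the apiserver is timing out (container IDs come from `crictl ps -a`; the hashes shown are specific to this run):

	sudo journalctl -u kubelet -n 400            # kubelet logs
	sudo journalctl -u crio -n 400               # CRI-O logs
	sudo crictl ps -a                            # all containers, including exited ones
	sudo crictl logs --tail 400 <container-id>   # e.g. one of the kube-apiserver containers listed above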
	I1124 13:57:13.202350  571407 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 13:57:13.202379  571407 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21932-348000/.minikube CaCertPath:/home/jenkins/minikube-integration/21932-348000/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21932-348000/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21932-348000/.minikube}
	I1124 13:57:13.202426  571407 ubuntu.go:190] setting up certificates
	I1124 13:57:13.202439  571407 provision.go:84] configureAuth start
	I1124 13:57:13.202498  571407 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-551674
	I1124 13:57:13.221012  571407 provision.go:143] copyHostCerts
	I1124 13:57:13.221073  571407 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-348000/.minikube/ca.pem, removing ...
	I1124 13:57:13.221087  571407 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-348000/.minikube/ca.pem
	I1124 13:57:13.221151  571407 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-348000/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21932-348000/.minikube/ca.pem (1078 bytes)
	I1124 13:57:13.221273  571407 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-348000/.minikube/cert.pem, removing ...
	I1124 13:57:13.221284  571407 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-348000/.minikube/cert.pem
	I1124 13:57:13.221318  571407 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-348000/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21932-348000/.minikube/cert.pem (1123 bytes)
	I1124 13:57:13.221407  571407 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-348000/.minikube/key.pem, removing ...
	I1124 13:57:13.221417  571407 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-348000/.minikube/key.pem
	I1124 13:57:13.221447  571407 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-348000/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21932-348000/.minikube/key.pem (1675 bytes)
	I1124 13:57:13.221524  571407 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21932-348000/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21932-348000/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21932-348000/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-551674 san=[127.0.0.1 192.168.94.2 localhost minikube old-k8s-version-551674]
	I1124 13:57:13.398720  571407 provision.go:177] copyRemoteCerts
	I1124 13:57:13.398770  571407 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 13:57:13.398802  571407 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-551674
	I1124 13:57:13.416935  571407 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/old-k8s-version-551674/id_rsa Username:docker}
	I1124 13:57:13.518029  571407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1124 13:57:13.538168  571407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1124 13:57:13.564571  571407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1124 13:57:13.583942  571407 provision.go:87] duration metric: took 381.485915ms to configureAuth
	I1124 13:57:13.583973  571407 ubuntu.go:206] setting minikube options for container-runtime
	I1124 13:57:13.584161  571407 config.go:182] Loaded profile config "old-k8s-version-551674": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1124 13:57:13.584302  571407 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-551674
	I1124 13:57:13.602852  571407 main.go:143] libmachine: Using SSH client type: native
	I1124 13:57:13.603185  571407 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33428 <nil> <nil>}
	I1124 13:57:13.603215  571407 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1124 13:57:13.914557  571407 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1124 13:57:13.914589  571407 machine.go:97] duration metric: took 4.211807799s to provisionDockerMachine
	I1124 13:57:13.914600  571407 client.go:176] duration metric: took 10.496435799s to LocalClient.Create
	I1124 13:57:13.914621  571407 start.go:167] duration metric: took 10.496496006s to libmachine.API.Create "old-k8s-version-551674"
	I1124 13:57:13.914630  571407 start.go:293] postStartSetup for "old-k8s-version-551674" (driver="docker")
	I1124 13:57:13.914643  571407 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 13:57:13.914705  571407 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 13:57:13.914750  571407 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-551674
	I1124 13:57:13.932579  571407 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/old-k8s-version-551674/id_rsa Username:docker}
	I1124 13:57:14.034959  571407 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 13:57:14.038500  571407 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 13:57:14.038524  571407 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 13:57:14.038534  571407 filesync.go:126] Scanning /home/jenkins/minikube-integration/21932-348000/.minikube/addons for local assets ...
	I1124 13:57:14.038589  571407 filesync.go:126] Scanning /home/jenkins/minikube-integration/21932-348000/.minikube/files for local assets ...
	I1124 13:57:14.038685  571407 filesync.go:149] local asset: /home/jenkins/minikube-integration/21932-348000/.minikube/files/etc/ssl/certs/3515932.pem -> 3515932.pem in /etc/ssl/certs
	I1124 13:57:14.038849  571407 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 13:57:14.046043  571407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/files/etc/ssl/certs/3515932.pem --> /etc/ssl/certs/3515932.pem (1708 bytes)
	I1124 13:57:14.066850  571407 start.go:296] duration metric: took 152.203471ms for postStartSetup
	I1124 13:57:14.067252  571407 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-551674
	I1124 13:57:14.086950  571407 profile.go:143] Saving config to /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/old-k8s-version-551674/config.json ...
	I1124 13:57:14.087194  571407 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 13:57:14.087267  571407 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-551674
	I1124 13:57:14.103520  571407 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/old-k8s-version-551674/id_rsa Username:docker}
	I1124 13:57:14.200726  571407 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 13:57:14.205096  571407 start.go:128] duration metric: took 10.788537853s to createHost
	I1124 13:57:14.205121  571407 start.go:83] releasing machines lock for "old-k8s-version-551674", held for 10.788676619s
	I1124 13:57:14.205194  571407 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-551674
	I1124 13:57:14.222491  571407 ssh_runner.go:195] Run: cat /version.json
	I1124 13:57:14.222545  571407 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-551674
	I1124 13:57:14.222561  571407 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 13:57:14.222642  571407 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-551674
	I1124 13:57:14.239699  571407 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/old-k8s-version-551674/id_rsa Username:docker}
	I1124 13:57:14.240606  571407 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/old-k8s-version-551674/id_rsa Username:docker}
	I1124 13:57:14.407100  571407 ssh_runner.go:195] Run: systemctl --version
	I1124 13:57:14.413662  571407 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1124 13:57:14.447074  571407 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 13:57:14.451669  571407 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 13:57:14.451734  571407 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 13:57:14.480870  571407 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1124 13:57:14.480911  571407 start.go:496] detecting cgroup driver to use...
	I1124 13:57:14.480950  571407 detect.go:190] detected "systemd" cgroup driver on host os
	I1124 13:57:14.481002  571407 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1124 13:57:14.498242  571407 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1124 13:57:14.511398  571407 docker.go:218] disabling cri-docker service (if available) ...
	I1124 13:57:14.511446  571407 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 13:57:14.527283  571407 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 13:57:14.545151  571407 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 13:57:14.627820  571407 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 13:57:14.723386  571407 docker.go:234] disabling docker service ...
	I1124 13:57:14.723448  571407 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 13:57:14.743576  571407 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 13:57:14.755922  571407 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 13:57:14.840061  571407 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 13:57:14.943460  571407 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 13:57:14.959320  571407 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 13:57:14.977831  571407 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1124 13:57:14.977911  571407 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 13:57:14.992400  571407 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1124 13:57:14.992460  571407 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 13:57:15.005051  571407 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 13:57:15.017430  571407 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 13:57:15.031626  571407 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 13:57:15.044305  571407 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 13:57:15.056322  571407 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 13:57:15.075714  571407 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 13:57:15.086756  571407 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 13:57:15.095024  571407 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 13:57:15.102382  571407 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 13:57:15.185193  571407 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1124 13:57:15.346750  571407 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1124 13:57:15.346825  571407 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1124 13:57:15.352113  571407 start.go:564] Will wait 60s for crictl version
	I1124 13:57:15.352172  571407 ssh_runner.go:195] Run: which crictl
	I1124 13:57:15.356702  571407 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 13:57:15.389387  571407 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1124 13:57:15.389481  571407 ssh_runner.go:195] Run: crio --version
	I1124 13:57:15.429438  571407 ssh_runner.go:195] Run: crio --version
	I1124 13:57:15.473653  571407 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.2 ...
	I1124 13:57:15.474976  571407 cli_runner.go:164] Run: docker network inspect old-k8s-version-551674 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 13:57:15.499863  571407 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1124 13:57:15.505506  571407 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 13:57:15.519694  571407 kubeadm.go:884] updating cluster {Name:old-k8s-version-551674 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-551674 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 13:57:15.519846  571407 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1124 13:57:15.519915  571407 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 13:57:15.564015  571407 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 13:57:15.564042  571407 crio.go:433] Images already preloaded, skipping extraction
	I1124 13:57:15.564110  571407 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 13:57:15.593925  571407 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 13:57:15.593954  571407 cache_images.go:86] Images are preloaded, skipping loading
	I1124 13:57:15.593964  571407 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.28.0 crio true true} ...
	I1124 13:57:15.594064  571407 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-551674 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-551674 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1124 13:57:15.594136  571407 ssh_runner.go:195] Run: crio config
	I1124 13:57:15.664717  571407 cni.go:84] Creating CNI manager for ""
	I1124 13:57:15.664740  571407 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 13:57:15.664758  571407 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 13:57:15.664783  571407 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-551674 NodeName:old-k8s-version-551674 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPod
Path:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 13:57:15.664933  571407 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-551674"
	  kubeletExtraArgs:
	    node-ip: 192.168.94.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
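The kubeadm/kubelet/kube-proxy config dump above is rendered by minikube and copied to /var/tmp/minikube/kubeadm.yaml.new before being compared against the existing file. A minimal, illustrative Go sketch of rendering such a config from a template follows; the struct and field names are hypothetical and simplified, not minikube's actual bootstrapper types, and the values are copied from the log.

	// Hypothetical sketch: render a kubeadm-style config from a Go template.
	// Not minikube's real code; fields and template are heavily simplified.
	package main

	import (
		"os"
		"text/template"
	)

	type kubeadmParams struct {
		AdvertiseAddress string
		BindPort         int
		NodeName         string
		PodSubnet        string
		ServiceSubnet    string
		K8sVersion       string
		CRISocket        string
	}

	const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: {{.BindPort}}
	nodeRegistration:
	  criSocket: {{.CRISocket}}
	  name: "{{.NodeName}}"
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	kubernetesVersion: {{.K8sVersion}}
	networking:
	  podSubnet: "{{.PodSubnet}}"
	  serviceSubnet: {{.ServiceSubnet}}
	`

	func main() {
		p := kubeadmParams{
			AdvertiseAddress: "192.168.94.2",
			BindPort:         8443,
			NodeName:         "old-k8s-version-551674",
			PodSubnet:        "10.244.0.0/16",
			ServiceSubnet:    "10.96.0.0/12",
			K8sVersion:       "v1.28.0",
			CRISocket:        "unix:///var/run/crio/crio.sock",
		}
		// Render to stdout; minikube instead writes the rendered file to the
		// node as /var/tmp/minikube/kubeadm.yaml.new (see the scp line below).
		if err := template.Must(template.New("kubeadm").Parse(tmpl)).Execute(os.Stdout, p); err != nil {
			panic(err)
		}
	}
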
	I1124 13:57:15.664996  571407 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1124 13:57:15.673875  571407 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 13:57:15.673953  571407 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 13:57:15.685520  571407 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1124 13:57:15.701279  571407 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 13:57:15.716241  571407 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I1124 13:57:15.728309  571407 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1124 13:57:15.731923  571407 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 13:57:15.741828  571407 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 13:57:15.820551  571407 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 13:57:15.841024  571407 certs.go:69] Setting up /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/old-k8s-version-551674 for IP: 192.168.94.2
	I1124 13:57:15.841046  571407 certs.go:195] generating shared ca certs ...
	I1124 13:57:15.841065  571407 certs.go:227] acquiring lock for ca certs: {Name:mk929c5478505d0d4647158f3ccc02830de7b582 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:57:15.841226  571407 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21932-348000/.minikube/ca.key
	I1124 13:57:15.841291  571407 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21932-348000/.minikube/proxy-client-ca.key
	I1124 13:57:15.841305  571407 certs.go:257] generating profile certs ...
	I1124 13:57:15.841368  571407 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/old-k8s-version-551674/client.key
	I1124 13:57:15.841382  571407 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/old-k8s-version-551674/client.crt with IP's: []
	I1124 13:57:15.913400  571407 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/old-k8s-version-551674/client.crt ...
	I1124 13:57:15.913425  571407 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/old-k8s-version-551674/client.crt: {Name:mk49b7f1d5ae517a4372141da3d88bc1e1a6f1d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:57:15.913612  571407 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/old-k8s-version-551674/client.key ...
	I1124 13:57:15.913629  571407 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/old-k8s-version-551674/client.key: {Name:mk211dfe7ae53822a5305fc5bb636e978477bda0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:57:15.913773  571407 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/old-k8s-version-551674/apiserver.key.a0d4b1b2
	I1124 13:57:15.913797  571407 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/old-k8s-version-551674/apiserver.crt.a0d4b1b2 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1124 13:57:15.975994  571407 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/old-k8s-version-551674/apiserver.crt.a0d4b1b2 ...
	I1124 13:57:15.976014  571407 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/old-k8s-version-551674/apiserver.crt.a0d4b1b2: {Name:mk707ad7d5fc3abfd025bfbdb2ef4548d9633c71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:57:15.976163  571407 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/old-k8s-version-551674/apiserver.key.a0d4b1b2 ...
	I1124 13:57:15.976184  571407 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/old-k8s-version-551674/apiserver.key.a0d4b1b2: {Name:mkb6618ed8dd343e3fa22300407a727e0fdb5dc3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:57:15.976296  571407 certs.go:382] copying /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/old-k8s-version-551674/apiserver.crt.a0d4b1b2 -> /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/old-k8s-version-551674/apiserver.crt
	I1124 13:57:15.976370  571407 certs.go:386] copying /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/old-k8s-version-551674/apiserver.key.a0d4b1b2 -> /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/old-k8s-version-551674/apiserver.key
	I1124 13:57:15.976423  571407 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/old-k8s-version-551674/proxy-client.key
	I1124 13:57:15.976437  571407 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/old-k8s-version-551674/proxy-client.crt with IP's: []
	I1124 13:57:16.029865  571407 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/old-k8s-version-551674/proxy-client.crt ...
	I1124 13:57:16.029896  571407 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/old-k8s-version-551674/proxy-client.crt: {Name:mke8c630c68d97aa112356eb2a1d2857d817178e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:57:16.030077  571407 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/old-k8s-version-551674/proxy-client.key ...
	I1124 13:57:16.030095  571407 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/old-k8s-version-551674/proxy-client.key: {Name:mkf3bd1fd01857aa08eceac6ffacefd52aca0f9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:57:16.030315  571407 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-348000/.minikube/certs/351593.pem (1338 bytes)
	W1124 13:57:16.030353  571407 certs.go:480] ignoring /home/jenkins/minikube-integration/21932-348000/.minikube/certs/351593_empty.pem, impossibly tiny 0 bytes
	I1124 13:57:16.030363  571407 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-348000/.minikube/certs/ca-key.pem (1675 bytes)
	I1124 13:57:16.030387  571407 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-348000/.minikube/certs/ca.pem (1078 bytes)
	I1124 13:57:16.030419  571407 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-348000/.minikube/certs/cert.pem (1123 bytes)
	I1124 13:57:16.030445  571407 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-348000/.minikube/certs/key.pem (1675 bytes)
	I1124 13:57:16.030486  571407 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-348000/.minikube/files/etc/ssl/certs/3515932.pem (1708 bytes)
	I1124 13:57:16.031086  571407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 13:57:16.048988  571407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 13:57:16.065309  571407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 13:57:16.082398  571407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1124 13:57:16.098911  571407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/old-k8s-version-551674/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1124 13:57:16.115120  571407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/old-k8s-version-551674/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1124 13:57:16.131401  571407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/old-k8s-version-551674/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 13:57:16.148615  571407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/old-k8s-version-551674/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1124 13:57:16.199862  571407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/files/etc/ssl/certs/3515932.pem --> /usr/share/ca-certificates/3515932.pem (1708 bytes)
	I1124 13:57:16.260944  571407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 13:57:16.277737  571407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/certs/351593.pem --> /usr/share/ca-certificates/351593.pem (1338 bytes)
	I1124 13:57:16.294173  571407 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 13:57:16.306075  571407 ssh_runner.go:195] Run: openssl version
	I1124 13:57:16.311813  571407 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3515932.pem && ln -fs /usr/share/ca-certificates/3515932.pem /etc/ssl/certs/3515932.pem"
	I1124 13:57:16.319518  571407 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3515932.pem
	I1124 13:57:16.322880  571407 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 13:19 /usr/share/ca-certificates/3515932.pem
	I1124 13:57:16.322941  571407 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3515932.pem
	I1124 13:57:16.356357  571407 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3515932.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 13:57:16.364146  571407 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 13:57:16.372066  571407 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 13:57:16.375542  571407 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 13:14 /usr/share/ca-certificates/minikubeCA.pem
	I1124 13:57:16.375580  571407 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 13:57:16.411251  571407 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 13:57:16.419362  571407 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/351593.pem && ln -fs /usr/share/ca-certificates/351593.pem /etc/ssl/certs/351593.pem"
	I1124 13:57:16.427222  571407 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/351593.pem
	I1124 13:57:16.430760  571407 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 13:19 /usr/share/ca-certificates/351593.pem
	I1124 13:57:16.430801  571407 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/351593.pem
	I1124 13:57:16.465249  571407 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/351593.pem /etc/ssl/certs/51391683.0"
	I1124 13:57:16.473859  571407 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 13:57:16.477268  571407 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1124 13:57:16.477337  571407 kubeadm.go:401] StartCluster: {Name:old-k8s-version-551674 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-551674 Namespace:default APIServerHAVIP: APIServerName:minikube
CA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwar
ePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 13:57:16.477426  571407 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 13:57:16.477483  571407 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 13:57:16.506759  571407 cri.go:89] found id: ""
	I1124 13:57:16.506814  571407 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 13:57:16.515866  571407 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1124 13:57:16.523775  571407 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1124 13:57:16.523826  571407 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 13:57:16.532502  571407 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1124 13:57:16.532522  571407 kubeadm.go:158] found existing configuration files:
	
	I1124 13:57:16.532570  571407 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1124 13:57:16.540467  571407 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1124 13:57:16.540518  571407 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1124 13:57:16.548607  571407 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1124 13:57:16.556445  571407 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1124 13:57:16.556494  571407 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 13:57:16.563968  571407 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1124 13:57:16.572359  571407 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1124 13:57:16.572430  571407 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 13:57:16.580125  571407 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1124 13:57:16.588611  571407 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1124 13:57:16.588658  571407 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1124 13:57:16.596399  571407 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1124 13:57:16.666278  571407 kubeadm.go:319] [init] Using Kubernetes version: v1.28.0
	I1124 13:57:16.666356  571407 kubeadm.go:319] [preflight] Running pre-flight checks
	I1124 13:57:16.712711  571407 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1124 13:57:16.712838  571407 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1124 13:57:16.712920  571407 kubeadm.go:319] OS: Linux
	I1124 13:57:16.712997  571407 kubeadm.go:319] CGROUPS_CPU: enabled
	I1124 13:57:16.713075  571407 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1124 13:57:16.713159  571407 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1124 13:57:16.713238  571407 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1124 13:57:16.713312  571407 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1124 13:57:16.713385  571407 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1124 13:57:16.713468  571407 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1124 13:57:16.713532  571407 kubeadm.go:319] CGROUPS_IO: enabled
	I1124 13:57:16.802858  571407 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1124 13:57:16.803038  571407 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1124 13:57:16.803167  571407 kubeadm.go:319] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1124 13:57:16.980343  571407 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1124 13:57:16.983596  571407 out.go:252]   - Generating certificates and keys ...
	I1124 13:57:16.983725  571407 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1124 13:57:16.983863  571407 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1124 13:57:17.243683  571407 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1124 13:57:17.508216  571407 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1124 13:57:17.737530  571407 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1124 13:57:17.797058  571407 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1124 13:57:17.933081  571407 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1124 13:57:17.933277  571407 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-551674] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1124 13:57:17.987172  571407 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1124 13:57:17.987378  571407 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-551674] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1124 13:57:18.257965  571407 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1124 13:57:18.503087  571407 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1124 13:57:18.590801  571407 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1124 13:57:18.590928  571407 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1124 13:57:18.733783  571407 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1124 13:57:18.979974  571407 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1124 13:57:19.160310  571407 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1124 13:57:19.303006  571407 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1124 13:57:19.303797  571407 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1124 13:57:19.311391  571407 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
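
The certs.go/crypto.go lines above generate a client certificate, an apiserver serving certificate with SAN IPs, and a proxy-client certificate, all signed by the cached minikubeCA. The following is an illustrative sketch of that kind of CA-signed certificate generation using Go's crypto/x509; it is not minikube's actual crypto.go, and subjects, serial numbers, and lifetimes are made up, while the SAN IPs are taken from the log.

	// Illustrative sketch: create a CA key pair and a CA-signed serving cert
	// with SAN IPs, roughly the kind of artifact written to
	// .../profiles/old-k8s-version-551674/apiserver.crt above.
	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		// CA key and self-signed CA certificate (errors elided for brevity).
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)

		// Server key and a CA-signed certificate carrying the apiserver SANs
		// seen in the log: 10.96.0.1, 127.0.0.1, 192.168.94.2.
		srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses:  []net.IP{net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("192.168.94.2")},
		}
		srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)

		// PEM-encode the serving certificate to stdout.
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
	}
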
	I1124 13:57:15.291776  573633 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1124 13:57:15.291833  573633 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 13:57:15.291899  573633 ssh_runner.go:195] Run: which crictl
	I1124 13:57:15.338903  573633 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 13:57:15.395563  573633 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 13:57:15.434242  573633 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 13:57:15.470602  573633 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21932-348000/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1124 13:57:15.470716  573633 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1124 13:57:15.479881  573633 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1124 13:57:15.479980  573633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I1124 13:57:15.500126  573633 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.12.1
	I1124 13:57:15.500187  573633 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1
	I1124 13:57:15.571122  573633 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1124 13:57:17.473564  573633 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1: (1.973351982s)
	I1124 13:57:17.473601  573633 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21932-348000/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 from cache
	I1124 13:57:17.473621  573633 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1124 13:57:17.473666  573633 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1124 13:57:17.473690  573633 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1: (1.902531252s)
	I1124 13:57:17.473744  573633 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f" in container runtime
	I1124 13:57:17.473779  573633 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1124 13:57:17.473824  573633 ssh_runner.go:195] Run: which crictl
	I1124 13:57:18.583660  573633 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1: (1.109964125s)
	I1124 13:57:18.583700  573633 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21932-348000/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 from cache
	I1124 13:57:18.583701  573633 ssh_runner.go:235] Completed: which crictl: (1.109851679s)
	I1124 13:57:18.583727  573633 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1124 13:57:18.583768  573633 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1124 13:57:18.583779  573633 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1124 13:57:19.749731  573633 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1: (1.165923358s)
	I1124 13:57:19.749759  573633 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21932-348000/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 from cache
	I1124 13:57:19.749760  573633 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1: (1.165962721s)
	I1124 13:57:19.749784  573633 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1124 13:57:19.749823  573633 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1124 13:57:19.749828  573633 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1
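
The 573633 lines above transfer cached image tarballs to the node and load them with "sudo podman load -i". A minimal sketch of that loading step is shown below; it runs the command locally via os/exec rather than over SSH as minikube's ssh_runner does, and the tarball path is only an example taken from the log.

	// Minimal sketch of the "Loading image" step: load a cached image tarball
	// into the CRI-O image store with podman. Illustrative only.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func loadImage(tarball string) error {
		// minikube runs this over SSH on the node; here we simply exec it.
		cmd := exec.Command("sudo", "podman", "load", "-i", tarball)
		out, err := cmd.CombinedOutput()
		if err != nil {
			return fmt.Errorf("podman load %s: %v\n%s", tarball, err, out)
		}
		return nil
	}

	func main() {
		if err := loadImage("/var/lib/minikube/images/storage-provisioner_v5"); err != nil {
			fmt.Println(err)
		}
	}
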
	I1124 13:57:16.375925  549693 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 13:57:16.792799  549693 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": read tcp 192.168.76.1:33976->192.168.76.2:8443: read: connection reset by peer
	I1124 13:57:16.792871  549693 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 13:57:16.792984  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 13:57:16.829616  549693 cri.go:89] found id: "d5c7a42a438924178c1e3946f6437111c8fac5f01d25a6f15d1f63c567efb073"
	I1124 13:57:16.829644  549693 cri.go:89] found id: "281b403f5869d6fd99f64af54bb1a111f4065c8ae8df6063d003eed1dc0818d3"
	I1124 13:57:16.829651  549693 cri.go:89] found id: ""
	I1124 13:57:16.829661  549693 logs.go:282] 2 containers: [d5c7a42a438924178c1e3946f6437111c8fac5f01d25a6f15d1f63c567efb073 281b403f5869d6fd99f64af54bb1a111f4065c8ae8df6063d003eed1dc0818d3]
	I1124 13:57:16.829719  549693 ssh_runner.go:195] Run: which crictl
	I1124 13:57:16.834625  549693 ssh_runner.go:195] Run: which crictl
	I1124 13:57:16.838722  549693 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 13:57:16.838805  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 13:57:16.871036  549693 cri.go:89] found id: ""
	I1124 13:57:16.871065  549693 logs.go:282] 0 containers: []
	W1124 13:57:16.871076  549693 logs.go:284] No container was found matching "etcd"
	I1124 13:57:16.871084  549693 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 13:57:16.871143  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 13:57:16.901221  549693 cri.go:89] found id: ""
	I1124 13:57:16.901254  549693 logs.go:282] 0 containers: []
	W1124 13:57:16.901266  549693 logs.go:284] No container was found matching "coredns"
	I1124 13:57:16.901274  549693 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 13:57:16.901340  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 13:57:16.935298  549693 cri.go:89] found id: "09474ec71ef80f8f9036b351250c36774a1251b7d7957550d815fc37dbb0f4ff"
	I1124 13:57:16.935332  549693 cri.go:89] found id: ""
	I1124 13:57:16.935344  549693 logs.go:282] 1 containers: [09474ec71ef80f8f9036b351250c36774a1251b7d7957550d815fc37dbb0f4ff]
	I1124 13:57:16.935578  549693 ssh_runner.go:195] Run: which crictl
	I1124 13:57:16.940815  549693 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 13:57:16.940883  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 13:57:16.972627  549693 cri.go:89] found id: ""
	I1124 13:57:16.972656  549693 logs.go:282] 0 containers: []
	W1124 13:57:16.972668  549693 logs.go:284] No container was found matching "kube-proxy"
	I1124 13:57:16.972676  549693 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 13:57:16.972742  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 13:57:17.003837  549693 cri.go:89] found id: "196609a37eafb369c7245cccb47becbe53c492038396d82778751fd2bc00c121"
	I1124 13:57:17.003860  549693 cri.go:89] found id: "1ccaf986d410e90f1733304d0ae319bacab43b9203872fcd4f8ebea4a60b66f9"
	I1124 13:57:17.003866  549693 cri.go:89] found id: ""
	I1124 13:57:17.003877  549693 logs.go:282] 2 containers: [196609a37eafb369c7245cccb47becbe53c492038396d82778751fd2bc00c121 1ccaf986d410e90f1733304d0ae319bacab43b9203872fcd4f8ebea4a60b66f9]
	I1124 13:57:17.003953  549693 ssh_runner.go:195] Run: which crictl
	I1124 13:57:17.008578  549693 ssh_runner.go:195] Run: which crictl
	I1124 13:57:17.012343  549693 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 13:57:17.012403  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 13:57:17.038718  549693 cri.go:89] found id: ""
	I1124 13:57:17.038739  549693 logs.go:282] 0 containers: []
	W1124 13:57:17.038748  549693 logs.go:284] No container was found matching "kindnet"
	I1124 13:57:17.038755  549693 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1124 13:57:17.038803  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 13:57:17.065817  549693 cri.go:89] found id: ""
	I1124 13:57:17.065838  549693 logs.go:282] 0 containers: []
	W1124 13:57:17.065848  549693 logs.go:284] No container was found matching "storage-provisioner"
	I1124 13:57:17.065865  549693 logs.go:123] Gathering logs for container status ...
	I1124 13:57:17.065878  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 13:57:17.098390  549693 logs.go:123] Gathering logs for kubelet ...
	I1124 13:57:17.098421  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 13:57:17.169632  549693 logs.go:123] Gathering logs for dmesg ...
	I1124 13:57:17.169672  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 13:57:17.190253  549693 logs.go:123] Gathering logs for describe nodes ...
	I1124 13:57:17.190286  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 13:57:17.252467  549693 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 13:57:17.252489  549693 logs.go:123] Gathering logs for kube-apiserver [281b403f5869d6fd99f64af54bb1a111f4065c8ae8df6063d003eed1dc0818d3] ...
	I1124 13:57:17.252506  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 281b403f5869d6fd99f64af54bb1a111f4065c8ae8df6063d003eed1dc0818d3"
	I1124 13:57:17.287708  549693 logs.go:123] Gathering logs for CRI-O ...
	I1124 13:57:17.287753  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 13:57:17.334910  549693 logs.go:123] Gathering logs for kube-apiserver [d5c7a42a438924178c1e3946f6437111c8fac5f01d25a6f15d1f63c567efb073] ...
	I1124 13:57:17.334943  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d5c7a42a438924178c1e3946f6437111c8fac5f01d25a6f15d1f63c567efb073"
	I1124 13:57:17.373257  549693 logs.go:123] Gathering logs for kube-scheduler [09474ec71ef80f8f9036b351250c36774a1251b7d7957550d815fc37dbb0f4ff] ...
	I1124 13:57:17.373306  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 09474ec71ef80f8f9036b351250c36774a1251b7d7957550d815fc37dbb0f4ff"
	I1124 13:57:17.436437  549693 logs.go:123] Gathering logs for kube-controller-manager [196609a37eafb369c7245cccb47becbe53c492038396d82778751fd2bc00c121] ...
	I1124 13:57:17.436472  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 196609a37eafb369c7245cccb47becbe53c492038396d82778751fd2bc00c121"
	I1124 13:57:17.463545  549693 logs.go:123] Gathering logs for kube-controller-manager [1ccaf986d410e90f1733304d0ae319bacab43b9203872fcd4f8ebea4a60b66f9] ...
	I1124 13:57:17.463571  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1ccaf986d410e90f1733304d0ae319bacab43b9203872fcd4f8ebea4a60b66f9"
	I1124 13:57:19.996015  549693 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 13:57:19.996443  549693 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 13:57:19.996503  549693 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 13:57:19.996557  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 13:57:20.024690  549693 cri.go:89] found id: "d5c7a42a438924178c1e3946f6437111c8fac5f01d25a6f15d1f63c567efb073"
	I1124 13:57:20.024711  549693 cri.go:89] found id: ""
	I1124 13:57:20.024721  549693 logs.go:282] 1 containers: [d5c7a42a438924178c1e3946f6437111c8fac5f01d25a6f15d1f63c567efb073]
	I1124 13:57:20.024773  549693 ssh_runner.go:195] Run: which crictl
	I1124 13:57:20.028789  549693 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 13:57:20.028848  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 13:57:20.054154  549693 cri.go:89] found id: ""
	I1124 13:57:20.054181  549693 logs.go:282] 0 containers: []
	W1124 13:57:20.054192  549693 logs.go:284] No container was found matching "etcd"
	I1124 13:57:20.054200  549693 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 13:57:20.054241  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 13:57:20.079287  549693 cri.go:89] found id: ""
	I1124 13:57:20.079313  549693 logs.go:282] 0 containers: []
	W1124 13:57:20.079325  549693 logs.go:284] No container was found matching "coredns"
	I1124 13:57:20.079332  549693 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 13:57:20.079376  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 13:57:20.105401  549693 cri.go:89] found id: "09474ec71ef80f8f9036b351250c36774a1251b7d7957550d815fc37dbb0f4ff"
	I1124 13:57:20.105423  549693 cri.go:89] found id: ""
	I1124 13:57:20.105432  549693 logs.go:282] 1 containers: [09474ec71ef80f8f9036b351250c36774a1251b7d7957550d815fc37dbb0f4ff]
	I1124 13:57:20.105487  549693 ssh_runner.go:195] Run: which crictl
	I1124 13:57:20.109416  549693 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 13:57:20.109467  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 13:57:20.135667  549693 cri.go:89] found id: ""
	I1124 13:57:20.135694  549693 logs.go:282] 0 containers: []
	W1124 13:57:20.135704  549693 logs.go:284] No container was found matching "kube-proxy"
	I1124 13:57:20.135711  549693 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 13:57:20.135763  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 13:57:20.162305  549693 cri.go:89] found id: "196609a37eafb369c7245cccb47becbe53c492038396d82778751fd2bc00c121"
	I1124 13:57:20.162327  549693 cri.go:89] found id: ""
	I1124 13:57:20.162337  549693 logs.go:282] 1 containers: [196609a37eafb369c7245cccb47becbe53c492038396d82778751fd2bc00c121]
	I1124 13:57:20.162392  549693 ssh_runner.go:195] Run: which crictl
	I1124 13:57:20.166315  549693 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 13:57:20.166375  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 13:57:20.191606  549693 cri.go:89] found id: ""
	I1124 13:57:20.191629  549693 logs.go:282] 0 containers: []
	W1124 13:57:20.191639  549693 logs.go:284] No container was found matching "kindnet"
	I1124 13:57:20.191646  549693 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1124 13:57:20.191703  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 13:57:20.218680  549693 cri.go:89] found id: ""
	I1124 13:57:20.218708  549693 logs.go:282] 0 containers: []
	W1124 13:57:20.218718  549693 logs.go:284] No container was found matching "storage-provisioner"
	I1124 13:57:20.218730  549693 logs.go:123] Gathering logs for CRI-O ...
	I1124 13:57:20.218743  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
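
The 549693 lines above repeatedly probe https://192.168.76.2:8443/healthz and fall back to gathering logs whenever the connection is refused or reset. A rough sketch of such a healthz polling loop follows; the URL and the connection-refused behaviour come from the log, while the client settings, timeout, and poll interval are illustrative assumptions.

	// Rough sketch: poll an apiserver /healthz endpoint until it returns 200
	// or a deadline passes. Not minikube's actual api_server.go.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 2 * time.Second,
			// The real check trusts minikubeCA; this sketch does not load it,
			// so certificate verification is skipped for the probe only.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.76.2:8443/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}
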
	I1124 13:57:19.312627  571407 out.go:252]   - Booting up control plane ...
	I1124 13:57:19.312753  571407 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1124 13:57:19.312868  571407 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1124 13:57:19.313521  571407 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1124 13:57:19.329510  571407 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1124 13:57:19.330569  571407 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1124 13:57:19.330625  571407 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1124 13:57:19.439952  571407 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1124 13:57:23.942581  571407 kubeadm.go:319] [apiclient] All control plane components are healthy after 4.502775 seconds
	I1124 13:57:23.942780  571407 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1124 13:57:23.953909  571407 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1124 13:57:24.473800  571407 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1124 13:57:24.474103  571407 kubeadm.go:319] [mark-control-plane] Marking the node old-k8s-version-551674 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1124 13:57:24.984809  571407 kubeadm.go:319] [bootstrap-token] Using token: ys6b1a.2xnctodtlxr4cy0e
	I1124 13:57:21.241643  573633 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1: (1.491787935s)
	I1124 13:57:21.241680  573633 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1: (1.491822521s)
	I1124 13:57:21.241776  573633 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1124 13:57:21.241687  573633 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21932-348000/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 from cache
	I1124 13:57:21.241883  573633 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.1
	I1124 13:57:21.241932  573633 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1
	I1124 13:57:21.274551  573633 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21932-348000/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I1124 13:57:21.274653  573633 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1124 13:57:22.542362  573633 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1: (1.300401s)
	I1124 13:57:22.542392  573633 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21932-348000/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 from cache
	I1124 13:57:22.542414  573633 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1124 13:57:22.542456  573633 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1124 13:57:22.542524  573633 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: (1.267857434s)
	I1124 13:57:22.542541  573633 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1124 13:57:22.542555  573633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (321024 bytes)
	I1124 13:57:23.087121  573633 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21932-348000/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1124 13:57:23.087168  573633 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.4-0
	I1124 13:57:23.087217  573633 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0
	I1124 13:57:20.263486  549693 logs.go:123] Gathering logs for container status ...
	I1124 13:57:20.263522  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 13:57:20.296182  549693 logs.go:123] Gathering logs for kubelet ...
	I1124 13:57:20.296210  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 13:57:20.382239  549693 logs.go:123] Gathering logs for dmesg ...
	I1124 13:57:20.382276  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 13:57:20.399521  549693 logs.go:123] Gathering logs for describe nodes ...
	I1124 13:57:20.399550  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 13:57:20.467474  549693 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 13:57:20.467493  549693 logs.go:123] Gathering logs for kube-apiserver [d5c7a42a438924178c1e3946f6437111c8fac5f01d25a6f15d1f63c567efb073] ...
	I1124 13:57:20.467506  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d5c7a42a438924178c1e3946f6437111c8fac5f01d25a6f15d1f63c567efb073"
	I1124 13:57:20.503293  549693 logs.go:123] Gathering logs for kube-scheduler [09474ec71ef80f8f9036b351250c36774a1251b7d7957550d815fc37dbb0f4ff] ...
	I1124 13:57:20.503324  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 09474ec71ef80f8f9036b351250c36774a1251b7d7957550d815fc37dbb0f4ff"
	I1124 13:57:20.551739  549693 logs.go:123] Gathering logs for kube-controller-manager [196609a37eafb369c7245cccb47becbe53c492038396d82778751fd2bc00c121] ...
	I1124 13:57:20.551776  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 196609a37eafb369c7245cccb47becbe53c492038396d82778751fd2bc00c121"
	I1124 13:57:23.080967  549693 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 13:57:23.081424  549693 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 13:57:23.081482  549693 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 13:57:23.081526  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 13:57:23.108986  549693 cri.go:89] found id: "d5c7a42a438924178c1e3946f6437111c8fac5f01d25a6f15d1f63c567efb073"
	I1124 13:57:23.109013  549693 cri.go:89] found id: ""
	I1124 13:57:23.109024  549693 logs.go:282] 1 containers: [d5c7a42a438924178c1e3946f6437111c8fac5f01d25a6f15d1f63c567efb073]
	I1124 13:57:23.109082  549693 ssh_runner.go:195] Run: which crictl
	I1124 13:57:23.113005  549693 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 13:57:23.113071  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 13:57:23.141535  549693 cri.go:89] found id: ""
	I1124 13:57:23.141567  549693 logs.go:282] 0 containers: []
	W1124 13:57:23.141577  549693 logs.go:284] No container was found matching "etcd"
	I1124 13:57:23.141585  549693 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 13:57:23.141645  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 13:57:23.168572  549693 cri.go:89] found id: ""
	I1124 13:57:23.168599  549693 logs.go:282] 0 containers: []
	W1124 13:57:23.168610  549693 logs.go:284] No container was found matching "coredns"
	I1124 13:57:23.168618  549693 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 13:57:23.168680  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 13:57:23.196831  549693 cri.go:89] found id: "09474ec71ef80f8f9036b351250c36774a1251b7d7957550d815fc37dbb0f4ff"
	I1124 13:57:23.196857  549693 cri.go:89] found id: ""
	I1124 13:57:23.196868  549693 logs.go:282] 1 containers: [09474ec71ef80f8f9036b351250c36774a1251b7d7957550d815fc37dbb0f4ff]
	I1124 13:57:23.196938  549693 ssh_runner.go:195] Run: which crictl
	I1124 13:57:23.200811  549693 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 13:57:23.200872  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 13:57:23.229863  549693 cri.go:89] found id: ""
	I1124 13:57:23.229915  549693 logs.go:282] 0 containers: []
	W1124 13:57:23.229926  549693 logs.go:284] No container was found matching "kube-proxy"
	I1124 13:57:23.229937  549693 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 13:57:23.229995  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 13:57:23.259650  549693 cri.go:89] found id: "196609a37eafb369c7245cccb47becbe53c492038396d82778751fd2bc00c121"
	I1124 13:57:23.259669  549693 cri.go:89] found id: ""
	I1124 13:57:23.259679  549693 logs.go:282] 1 containers: [196609a37eafb369c7245cccb47becbe53c492038396d82778751fd2bc00c121]
	I1124 13:57:23.259735  549693 ssh_runner.go:195] Run: which crictl
	I1124 13:57:23.263487  549693 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 13:57:23.263539  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 13:57:23.289669  549693 cri.go:89] found id: ""
	I1124 13:57:23.289693  549693 logs.go:282] 0 containers: []
	W1124 13:57:23.289706  549693 logs.go:284] No container was found matching "kindnet"
	I1124 13:57:23.289713  549693 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1124 13:57:23.289754  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 13:57:23.315479  549693 cri.go:89] found id: ""
	I1124 13:57:23.315503  549693 logs.go:282] 0 containers: []
	W1124 13:57:23.315512  549693 logs.go:284] No container was found matching "storage-provisioner"
	I1124 13:57:23.315524  549693 logs.go:123] Gathering logs for container status ...
	I1124 13:57:23.315541  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 13:57:23.344882  549693 logs.go:123] Gathering logs for kubelet ...
	I1124 13:57:23.344923  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 13:57:23.416039  549693 logs.go:123] Gathering logs for dmesg ...
	I1124 13:57:23.416065  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 13:57:23.432805  549693 logs.go:123] Gathering logs for describe nodes ...
	I1124 13:57:23.432837  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 13:57:23.502478  549693 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 13:57:23.502506  549693 logs.go:123] Gathering logs for kube-apiserver [d5c7a42a438924178c1e3946f6437111c8fac5f01d25a6f15d1f63c567efb073] ...
	I1124 13:57:23.502524  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d5c7a42a438924178c1e3946f6437111c8fac5f01d25a6f15d1f63c567efb073"
	I1124 13:57:23.543728  549693 logs.go:123] Gathering logs for kube-scheduler [09474ec71ef80f8f9036b351250c36774a1251b7d7957550d815fc37dbb0f4ff] ...
	I1124 13:57:23.543766  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 09474ec71ef80f8f9036b351250c36774a1251b7d7957550d815fc37dbb0f4ff"
	I1124 13:57:23.600218  549693 logs.go:123] Gathering logs for kube-controller-manager [196609a37eafb369c7245cccb47becbe53c492038396d82778751fd2bc00c121] ...
	I1124 13:57:23.600255  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 196609a37eafb369c7245cccb47becbe53c492038396d82778751fd2bc00c121"
	I1124 13:57:23.633238  549693 logs.go:123] Gathering logs for CRI-O ...
	I1124 13:57:23.633273  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
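The block above is minikube's log-gathering fallback: while the apiserver stays unreachable it repeatedly lists CRI containers per component, then collects kubelet, dmesg, CRI-O and container-status output. A minimal shell sketch of the same polling pattern (endpoint and sleep interval are illustrative, not taken from the report):

    # Poll the apiserver health endpoint; gather diagnostics while it refuses connections.
    until curl -sk --max-time 2 https://192.168.76.2:8443/healthz >/dev/null; do
        sudo crictl ps -a --quiet --name=kube-apiserver
        sudo journalctl -u kubelet -n 400 | tail -n 20
        sleep 3
    done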
	I1124 13:57:24.986140  571407 out.go:252]   - Configuring RBAC rules ...
	I1124 13:57:24.986305  571407 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1124 13:57:24.990271  571407 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1124 13:57:24.996326  571407 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1124 13:57:25.003126  571407 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1124 13:57:25.005633  571407 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1124 13:57:25.008373  571407 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1124 13:57:25.018954  571407 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1124 13:57:25.212181  571407 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1124 13:57:25.393851  571407 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1124 13:57:25.395251  571407 kubeadm.go:319] 
	I1124 13:57:25.395372  571407 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1124 13:57:25.395395  571407 kubeadm.go:319] 
	I1124 13:57:25.395546  571407 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1124 13:57:25.395564  571407 kubeadm.go:319] 
	I1124 13:57:25.395608  571407 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1124 13:57:25.395707  571407 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1124 13:57:25.395801  571407 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1124 13:57:25.395819  571407 kubeadm.go:319] 
	I1124 13:57:25.395922  571407 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1124 13:57:25.395932  571407 kubeadm.go:319] 
	I1124 13:57:25.396002  571407 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1124 13:57:25.396011  571407 kubeadm.go:319] 
	I1124 13:57:25.396083  571407 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1124 13:57:25.396202  571407 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1124 13:57:25.396309  571407 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1124 13:57:25.396319  571407 kubeadm.go:319] 
	I1124 13:57:25.396441  571407 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1124 13:57:25.396559  571407 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1124 13:57:25.396589  571407 kubeadm.go:319] 
	I1124 13:57:25.396707  571407 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token ys6b1a.2xnctodtlxr4cy0e \
	I1124 13:57:25.396853  571407 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:8508f5e374ce1614712f271f50423a392652f73206d8a868cc7aac45c80e4a0c \
	I1124 13:57:25.396901  571407 kubeadm.go:319] 	--control-plane 
	I1124 13:57:25.396918  571407 kubeadm.go:319] 
	I1124 13:57:25.397034  571407 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1124 13:57:25.397044  571407 kubeadm.go:319] 
	I1124 13:57:25.397153  571407 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token ys6b1a.2xnctodtlxr4cy0e \
	I1124 13:57:25.397277  571407 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:8508f5e374ce1614712f271f50423a392652f73206d8a868cc7aac45c80e4a0c 
	I1124 13:57:25.399456  571407 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1124 13:57:25.399592  571407 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1124 13:57:25.399632  571407 cni.go:84] Creating CNI manager for ""
	I1124 13:57:25.399650  571407 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 13:57:25.401778  571407 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1124 13:57:25.402966  571407 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1124 13:57:25.407483  571407 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I1124 13:57:25.407502  571407 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1124 13:57:25.421315  571407 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1124 13:57:26.677665  571407 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.256314192s)
	I1124 13:57:26.677717  571407 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1124 13:57:26.677802  571407 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:57:26.678026  571407 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-551674 minikube.k8s.io/updated_at=2025_11_24T13_57_26_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=b5d1c9f4e75f4e638a533695fd62619949cefcab minikube.k8s.io/name=old-k8s-version-551674 minikube.k8s.io/primary=true
	I1124 13:57:26.764260  571407 ops.go:34] apiserver oom_adj: -16
	I1124 13:57:26.764291  571407 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:57:27.264377  571407 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:57:27.764385  571407 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:57:26.800759  573633 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0: (3.713511269s)
	I1124 13:57:26.800796  573633 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21932-348000/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 from cache
	I1124 13:57:26.800822  573633 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1124 13:57:26.800864  573633 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	I1124 13:57:26.915336  573633 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21932-348000/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 from cache
	I1124 13:57:26.915375  573633 cache_images.go:125] Successfully loaded all cached images
	I1124 13:57:26.915380  573633 cache_images.go:94] duration metric: took 12.176501438s to LoadCachedImages
	I1124 13:57:26.915392  573633 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1124 13:57:26.915482  573633 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-495729 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-495729 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
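The kubelet unit dumped above uses the standard systemd drop-in trick: an empty ExecStart= first clears the command inherited from the base unit, then the minikube-specific command line is set. A hand-written equivalent would look roughly like this (flag list abbreviated; paths as in the log):

    sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf <<'EOF'
    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --config=/var/lib/kubelet/config.yaml --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
    EOF
    sudo systemctl daemon-reload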
	I1124 13:57:26.915548  573633 ssh_runner.go:195] Run: crio config
	I1124 13:57:26.961598  573633 cni.go:84] Creating CNI manager for ""
	I1124 13:57:26.961625  573633 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 13:57:26.961644  573633 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 13:57:26.961673  573633 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-495729 NodeName:no-preload-495729 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 13:57:26.961822  573633 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-495729"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
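The rendered kubeadm config above is later copied to /var/tmp/minikube/kubeadm.yaml and passed to kubeadm init. If you want to sanity-check such a config by hand, a dry run validates it without modifying the node (illustrative command, not part of the test run):

    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run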
	I1124 13:57:26.961912  573633 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1124 13:57:26.970397  573633 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.1': No such file or directory
	
	Initiating transfer...
	I1124 13:57:26.970452  573633 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.1
	I1124 13:57:26.978753  573633 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
	I1124 13:57:26.978848  573633 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl
	I1124 13:57:26.978857  573633 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/21932-348000/.minikube/cache/linux/amd64/v1.34.1/kubeadm
	I1124 13:57:26.978903  573633 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/21932-348000/.minikube/cache/linux/amd64/v1.34.1/kubelet
	I1124 13:57:26.982826  573633 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubectl': No such file or directory
	I1124 13:57:26.982850  573633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/cache/linux/amd64/v1.34.1/kubectl --> /var/lib/minikube/binaries/v1.34.1/kubectl (60559544 bytes)
	I1124 13:57:27.766717  573633 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 13:57:27.780138  573633 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet
	I1124 13:57:27.784047  573633 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubelet': No such file or directory
	I1124 13:57:27.784079  573633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/cache/linux/amd64/v1.34.1/kubelet --> /var/lib/minikube/binaries/v1.34.1/kubelet (59195684 bytes)
	I1124 13:57:27.822694  573633 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm
	I1124 13:57:27.830525  573633 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubeadm': No such file or directory
	I1124 13:57:27.830561  573633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/cache/linux/amd64/v1.34.1/kubeadm --> /var/lib/minikube/binaries/v1.34.1/kubeadm (74027192 bytes)
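Because this no-preload profile ships no image/binary tarball, kubectl, kubelet and kubeadm are fetched from dl.k8s.io with the checksum pinned to the matching .sha256 file, then copied onto the node. The manual equivalent of that download-and-verify step would be roughly:

    KVER=v1.34.1
    curl -fsSLo kubectl "https://dl.k8s.io/release/${KVER}/bin/linux/amd64/kubectl"
    echo "$(curl -fsSL https://dl.k8s.io/release/${KVER}/bin/linux/amd64/kubectl.sha256)  kubectl" | sha256sum -c -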
	I1124 13:57:28.094598  573633 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 13:57:28.102871  573633 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1124 13:57:28.115287  573633 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 13:57:28.129863  573633 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
	I1124 13:57:28.142982  573633 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1124 13:57:28.146672  573633 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 13:57:28.155948  573633 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 13:57:28.236448  573633 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 13:57:28.260867  573633 certs.go:69] Setting up /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/no-preload-495729 for IP: 192.168.103.2
	I1124 13:57:28.260918  573633 certs.go:195] generating shared ca certs ...
	I1124 13:57:28.260936  573633 certs.go:227] acquiring lock for ca certs: {Name:mk929c5478505d0d4647158f3ccc02830de7b582 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:57:28.261108  573633 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21932-348000/.minikube/ca.key
	I1124 13:57:28.261162  573633 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21932-348000/.minikube/proxy-client-ca.key
	I1124 13:57:28.261173  573633 certs.go:257] generating profile certs ...
	I1124 13:57:28.261225  573633 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/no-preload-495729/client.key
	I1124 13:57:28.261239  573633 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/no-preload-495729/client.crt with IP's: []
	I1124 13:57:28.400253  573633 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/no-preload-495729/client.crt ...
	I1124 13:57:28.400279  573633 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/no-preload-495729/client.crt: {Name:mk1bccb90b80822e2b694d0e1d16f81c17491caa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:57:28.400448  573633 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/no-preload-495729/client.key ...
	I1124 13:57:28.400461  573633 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/no-preload-495729/client.key: {Name:mke310e1ee824c765c4c6b1434da5b7bb54684f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:57:28.400549  573633 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/no-preload-495729/apiserver.key.e15203c8
	I1124 13:57:28.400564  573633 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/no-preload-495729/apiserver.crt.e15203c8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1124 13:57:28.444920  573633 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/no-preload-495729/apiserver.crt.e15203c8 ...
	I1124 13:57:28.444940  573633 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/no-preload-495729/apiserver.crt.e15203c8: {Name:mkbd69e0ecab03baf64997b662fa9aff127b2c25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:57:28.445058  573633 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/no-preload-495729/apiserver.key.e15203c8 ...
	I1124 13:57:28.445072  573633 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/no-preload-495729/apiserver.key.e15203c8: {Name:mkaddef9aed1936bc049b484899750225c43f048 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:57:28.445145  573633 certs.go:382] copying /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/no-preload-495729/apiserver.crt.e15203c8 -> /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/no-preload-495729/apiserver.crt
	I1124 13:57:28.445227  573633 certs.go:386] copying /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/no-preload-495729/apiserver.key.e15203c8 -> /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/no-preload-495729/apiserver.key
	I1124 13:57:28.445287  573633 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/no-preload-495729/proxy-client.key
	I1124 13:57:28.445301  573633 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/no-preload-495729/proxy-client.crt with IP's: []
	I1124 13:57:28.702286  573633 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/no-preload-495729/proxy-client.crt ...
	I1124 13:57:28.702311  573633 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/no-preload-495729/proxy-client.crt: {Name:mk851290cc76e1a7a35547c1a0c59d85e9313498 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:57:28.702456  573633 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/no-preload-495729/proxy-client.key ...
	I1124 13:57:28.702469  573633 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/no-preload-495729/proxy-client.key: {Name:mk8a9c376fd5d4087cccdd45da4782aa62060990 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:57:28.702668  573633 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-348000/.minikube/certs/351593.pem (1338 bytes)
	W1124 13:57:28.702713  573633 certs.go:480] ignoring /home/jenkins/minikube-integration/21932-348000/.minikube/certs/351593_empty.pem, impossibly tiny 0 bytes
	I1124 13:57:28.702734  573633 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-348000/.minikube/certs/ca-key.pem (1675 bytes)
	I1124 13:57:28.702762  573633 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-348000/.minikube/certs/ca.pem (1078 bytes)
	I1124 13:57:28.702785  573633 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-348000/.minikube/certs/cert.pem (1123 bytes)
	I1124 13:57:28.702807  573633 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-348000/.minikube/certs/key.pem (1675 bytes)
	I1124 13:57:28.702851  573633 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-348000/.minikube/files/etc/ssl/certs/3515932.pem (1708 bytes)
	I1124 13:57:28.703492  573633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 13:57:28.721561  573633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 13:57:28.738217  573633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 13:57:28.755118  573633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1124 13:57:28.772487  573633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/no-preload-495729/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1124 13:57:28.790232  573633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/no-preload-495729/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1124 13:57:28.806994  573633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/no-preload-495729/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 13:57:28.825336  573633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/no-preload-495729/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1124 13:57:28.842370  573633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/certs/351593.pem --> /usr/share/ca-certificates/351593.pem (1338 bytes)
	I1124 13:57:28.861567  573633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/files/etc/ssl/certs/3515932.pem --> /usr/share/ca-certificates/3515932.pem (1708 bytes)
	I1124 13:57:28.878084  573633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 13:57:28.894539  573633 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 13:57:28.906069  573633 ssh_runner.go:195] Run: openssl version
	I1124 13:57:28.912015  573633 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 13:57:28.920046  573633 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 13:57:28.923491  573633 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 13:14 /usr/share/ca-certificates/minikubeCA.pem
	I1124 13:57:28.923542  573633 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 13:57:28.958031  573633 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 13:57:28.966087  573633 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/351593.pem && ln -fs /usr/share/ca-certificates/351593.pem /etc/ssl/certs/351593.pem"
	I1124 13:57:28.974753  573633 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/351593.pem
	I1124 13:57:28.978927  573633 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 13:19 /usr/share/ca-certificates/351593.pem
	I1124 13:57:28.978990  573633 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/351593.pem
	I1124 13:57:29.016023  573633 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/351593.pem /etc/ssl/certs/51391683.0"
	I1124 13:57:29.024271  573633 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3515932.pem && ln -fs /usr/share/ca-certificates/3515932.pem /etc/ssl/certs/3515932.pem"
	I1124 13:57:29.032274  573633 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3515932.pem
	I1124 13:57:29.036019  573633 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 13:19 /usr/share/ca-certificates/3515932.pem
	I1124 13:57:29.036062  573633 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3515932.pem
	I1124 13:57:29.070777  573633 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3515932.pem /etc/ssl/certs/3ec20f2e.0"
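The openssl/ln sequence above installs each CA under /etc/ssl/certs twice: once by name and once under its subject-hash filename (for example b5213941.0), which is how OpenSSL-based clients locate trusted certificates. A condensed sketch of that pattern:

    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"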
	I1124 13:57:29.078641  573633 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 13:57:29.082085  573633 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1124 13:57:29.082142  573633 kubeadm.go:401] StartCluster: {Name:no-preload-495729 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-495729 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 13:57:29.082213  573633 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 13:57:29.082248  573633 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 13:57:29.107771  573633 cri.go:89] found id: ""
	I1124 13:57:29.107827  573633 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 13:57:29.115381  573633 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1124 13:57:29.122978  573633 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1124 13:57:29.123027  573633 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 13:57:29.130358  573633 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1124 13:57:29.130375  573633 kubeadm.go:158] found existing configuration files:
	
	I1124 13:57:29.130410  573633 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1124 13:57:29.137885  573633 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1124 13:57:29.137951  573633 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1124 13:57:29.145044  573633 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1124 13:57:29.152398  573633 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1124 13:57:29.152440  573633 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 13:57:29.159490  573633 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1124 13:57:29.166626  573633 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1124 13:57:29.166660  573633 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 13:57:29.173444  573633 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1124 13:57:29.180625  573633 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1124 13:57:29.180661  573633 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
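The stale-config cleanup above checks each kubeconfig under /etc/kubernetes for the expected control-plane endpoint and removes any file that does not reference it (here the files are simply missing, so all four removals are no-ops). The same pass expressed as a loop, for illustration:

    for f in admin kubelet controller-manager scheduler; do
        sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/${f}.conf" \
            || sudo rm -f "/etc/kubernetes/${f}.conf"
    done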
	I1124 13:57:29.187509  573633 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1124 13:57:29.220910  573633 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1124 13:57:29.221011  573633 kubeadm.go:319] [preflight] Running pre-flight checks
	I1124 13:57:29.240437  573633 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1124 13:57:29.240505  573633 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1124 13:57:29.240546  573633 kubeadm.go:319] OS: Linux
	I1124 13:57:29.240627  573633 kubeadm.go:319] CGROUPS_CPU: enabled
	I1124 13:57:29.240721  573633 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1124 13:57:29.240783  573633 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1124 13:57:29.240860  573633 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1124 13:57:29.240945  573633 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1124 13:57:29.241022  573633 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1124 13:57:29.241095  573633 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1124 13:57:29.241156  573633 kubeadm.go:319] CGROUPS_IO: enabled
	I1124 13:57:29.300735  573633 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1124 13:57:29.300861  573633 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1124 13:57:29.301006  573633 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1124 13:57:29.316241  573633 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1124 13:57:29.318709  573633 out.go:252]   - Generating certificates and keys ...
	I1124 13:57:29.318816  573633 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1124 13:57:29.318959  573633 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1124 13:57:29.467801  573633 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1124 13:57:29.926743  573633 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1124 13:57:30.102501  573633 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1124 13:57:26.184991  549693 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 13:57:26.185401  549693 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 13:57:26.185464  549693 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 13:57:26.185517  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 13:57:26.212660  549693 cri.go:89] found id: "d5c7a42a438924178c1e3946f6437111c8fac5f01d25a6f15d1f63c567efb073"
	I1124 13:57:26.212681  549693 cri.go:89] found id: ""
	I1124 13:57:26.212690  549693 logs.go:282] 1 containers: [d5c7a42a438924178c1e3946f6437111c8fac5f01d25a6f15d1f63c567efb073]
	I1124 13:57:26.212744  549693 ssh_runner.go:195] Run: which crictl
	I1124 13:57:26.216615  549693 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 13:57:26.216674  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 13:57:26.243277  549693 cri.go:89] found id: ""
	I1124 13:57:26.243305  549693 logs.go:282] 0 containers: []
	W1124 13:57:26.243313  549693 logs.go:284] No container was found matching "etcd"
	I1124 13:57:26.243320  549693 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 13:57:26.243381  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 13:57:26.270037  549693 cri.go:89] found id: ""
	I1124 13:57:26.270061  549693 logs.go:282] 0 containers: []
	W1124 13:57:26.270071  549693 logs.go:284] No container was found matching "coredns"
	I1124 13:57:26.270078  549693 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 13:57:26.270135  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 13:57:26.296960  549693 cri.go:89] found id: "09474ec71ef80f8f9036b351250c36774a1251b7d7957550d815fc37dbb0f4ff"
	I1124 13:57:26.296995  549693 cri.go:89] found id: ""
	I1124 13:57:26.297007  549693 logs.go:282] 1 containers: [09474ec71ef80f8f9036b351250c36774a1251b7d7957550d815fc37dbb0f4ff]
	I1124 13:57:26.297070  549693 ssh_runner.go:195] Run: which crictl
	I1124 13:57:26.301134  549693 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 13:57:26.301198  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 13:57:26.330601  549693 cri.go:89] found id: ""
	I1124 13:57:26.330626  549693 logs.go:282] 0 containers: []
	W1124 13:57:26.330634  549693 logs.go:284] No container was found matching "kube-proxy"
	I1124 13:57:26.330640  549693 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 13:57:26.330701  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 13:57:26.355988  549693 cri.go:89] found id: "196609a37eafb369c7245cccb47becbe53c492038396d82778751fd2bc00c121"
	I1124 13:57:26.356012  549693 cri.go:89] found id: ""
	I1124 13:57:26.356023  549693 logs.go:282] 1 containers: [196609a37eafb369c7245cccb47becbe53c492038396d82778751fd2bc00c121]
	I1124 13:57:26.356072  549693 ssh_runner.go:195] Run: which crictl
	I1124 13:57:26.360027  549693 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 13:57:26.360089  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 13:57:26.386931  549693 cri.go:89] found id: ""
	I1124 13:57:26.386961  549693 logs.go:282] 0 containers: []
	W1124 13:57:26.386970  549693 logs.go:284] No container was found matching "kindnet"
	I1124 13:57:26.386980  549693 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1124 13:57:26.387037  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 13:57:26.413206  549693 cri.go:89] found id: ""
	I1124 13:57:26.413234  549693 logs.go:282] 0 containers: []
	W1124 13:57:26.413246  549693 logs.go:284] No container was found matching "storage-provisioner"
	I1124 13:57:26.413260  549693 logs.go:123] Gathering logs for kube-scheduler [09474ec71ef80f8f9036b351250c36774a1251b7d7957550d815fc37dbb0f4ff] ...
	I1124 13:57:26.413279  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 09474ec71ef80f8f9036b351250c36774a1251b7d7957550d815fc37dbb0f4ff"
	I1124 13:57:26.458907  549693 logs.go:123] Gathering logs for kube-controller-manager [196609a37eafb369c7245cccb47becbe53c492038396d82778751fd2bc00c121] ...
	I1124 13:57:26.458939  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 196609a37eafb369c7245cccb47becbe53c492038396d82778751fd2bc00c121"
	I1124 13:57:26.484744  549693 logs.go:123] Gathering logs for CRI-O ...
	I1124 13:57:26.484773  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 13:57:26.528348  549693 logs.go:123] Gathering logs for container status ...
	I1124 13:57:26.528379  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 13:57:26.558726  549693 logs.go:123] Gathering logs for kubelet ...
	I1124 13:57:26.558753  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 13:57:26.630322  549693 logs.go:123] Gathering logs for dmesg ...
	I1124 13:57:26.630353  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 13:57:26.646849  549693 logs.go:123] Gathering logs for describe nodes ...
	I1124 13:57:26.646872  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 13:57:26.728844  549693 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 13:57:26.728867  549693 logs.go:123] Gathering logs for kube-apiserver [d5c7a42a438924178c1e3946f6437111c8fac5f01d25a6f15d1f63c567efb073] ...
	I1124 13:57:26.728883  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d5c7a42a438924178c1e3946f6437111c8fac5f01d25a6f15d1f63c567efb073"
	I1124 13:57:29.271035  549693 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 13:57:29.271409  549693 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 13:57:29.271465  549693 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 13:57:29.271508  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 13:57:29.304860  549693 cri.go:89] found id: "d5c7a42a438924178c1e3946f6437111c8fac5f01d25a6f15d1f63c567efb073"
	I1124 13:57:29.304882  549693 cri.go:89] found id: ""
	I1124 13:57:29.304903  549693 logs.go:282] 1 containers: [d5c7a42a438924178c1e3946f6437111c8fac5f01d25a6f15d1f63c567efb073]
	I1124 13:57:29.304961  549693 ssh_runner.go:195] Run: which crictl
	I1124 13:57:29.309305  549693 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 13:57:29.309368  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 13:57:29.339516  549693 cri.go:89] found id: ""
	I1124 13:57:29.339540  549693 logs.go:282] 0 containers: []
	W1124 13:57:29.339550  549693 logs.go:284] No container was found matching "etcd"
	I1124 13:57:29.339557  549693 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 13:57:29.339620  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 13:57:29.365924  549693 cri.go:89] found id: ""
	I1124 13:57:29.365950  549693 logs.go:282] 0 containers: []
	W1124 13:57:29.365960  549693 logs.go:284] No container was found matching "coredns"
	I1124 13:57:29.365969  549693 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 13:57:29.366026  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 13:57:29.393209  549693 cri.go:89] found id: "09474ec71ef80f8f9036b351250c36774a1251b7d7957550d815fc37dbb0f4ff"
	I1124 13:57:29.393230  549693 cri.go:89] found id: ""
	I1124 13:57:29.393241  549693 logs.go:282] 1 containers: [09474ec71ef80f8f9036b351250c36774a1251b7d7957550d815fc37dbb0f4ff]
	I1124 13:57:29.393284  549693 ssh_runner.go:195] Run: which crictl
	I1124 13:57:29.397084  549693 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 13:57:29.397141  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 13:57:29.421881  549693 cri.go:89] found id: ""
	I1124 13:57:29.421941  549693 logs.go:282] 0 containers: []
	W1124 13:57:29.421950  549693 logs.go:284] No container was found matching "kube-proxy"
	I1124 13:57:29.421963  549693 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 13:57:29.422016  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 13:57:29.446504  549693 cri.go:89] found id: "196609a37eafb369c7245cccb47becbe53c492038396d82778751fd2bc00c121"
	I1124 13:57:29.446521  549693 cri.go:89] found id: ""
	I1124 13:57:29.446531  549693 logs.go:282] 1 containers: [196609a37eafb369c7245cccb47becbe53c492038396d82778751fd2bc00c121]
	I1124 13:57:29.446579  549693 ssh_runner.go:195] Run: which crictl
	I1124 13:57:29.450356  549693 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 13:57:29.450407  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 13:57:29.476041  549693 cri.go:89] found id: ""
	I1124 13:57:29.476064  549693 logs.go:282] 0 containers: []
	W1124 13:57:29.476074  549693 logs.go:284] No container was found matching "kindnet"
	I1124 13:57:29.476081  549693 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1124 13:57:29.476130  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 13:57:29.502720  549693 cri.go:89] found id: ""
	I1124 13:57:29.502744  549693 logs.go:282] 0 containers: []
	W1124 13:57:29.502754  549693 logs.go:284] No container was found matching "storage-provisioner"
	I1124 13:57:29.502765  549693 logs.go:123] Gathering logs for describe nodes ...
	I1124 13:57:29.502779  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 13:57:29.556575  549693 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 13:57:29.556597  549693 logs.go:123] Gathering logs for kube-apiserver [d5c7a42a438924178c1e3946f6437111c8fac5f01d25a6f15d1f63c567efb073] ...
	I1124 13:57:29.556613  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d5c7a42a438924178c1e3946f6437111c8fac5f01d25a6f15d1f63c567efb073"
	I1124 13:57:29.590498  549693 logs.go:123] Gathering logs for kube-scheduler [09474ec71ef80f8f9036b351250c36774a1251b7d7957550d815fc37dbb0f4ff] ...
	I1124 13:57:29.590527  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 09474ec71ef80f8f9036b351250c36774a1251b7d7957550d815fc37dbb0f4ff"
	I1124 13:57:29.633876  549693 logs.go:123] Gathering logs for kube-controller-manager [196609a37eafb369c7245cccb47becbe53c492038396d82778751fd2bc00c121] ...
	I1124 13:57:29.633912  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 196609a37eafb369c7245cccb47becbe53c492038396d82778751fd2bc00c121"
	I1124 13:57:29.658534  549693 logs.go:123] Gathering logs for CRI-O ...
	I1124 13:57:29.658558  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 13:57:29.699288  549693 logs.go:123] Gathering logs for container status ...
	I1124 13:57:29.699315  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 13:57:29.728940  549693 logs.go:123] Gathering logs for kubelet ...
	I1124 13:57:29.728970  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 13:57:29.810491  549693 logs.go:123] Gathering logs for dmesg ...
	I1124 13:57:29.810520  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 13:57:28.265193  571407 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:57:28.764415  571407 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:57:29.265156  571407 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:57:29.765163  571407 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:57:30.264394  571407 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:57:30.765058  571407 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:57:31.264584  571407 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:57:31.764564  571407 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:57:32.264921  571407 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:57:32.764796  571407 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:57:30.379570  573633 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1124 13:57:31.111350  573633 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1124 13:57:31.111560  573633 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-495729] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1124 13:57:31.266158  573633 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1124 13:57:31.266353  573633 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-495729] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1124 13:57:31.686144  573633 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1124 13:57:31.923523  573633 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1124 13:57:32.185038  573633 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1124 13:57:32.185110  573633 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1124 13:57:32.528464  573633 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1124 13:57:33.073112  573633 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1124 13:57:33.168005  573633 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1124 13:57:33.598124  573633 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1124 13:57:33.690558  573633 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1124 13:57:33.691134  573633 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1124 13:57:33.694570  573633 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1124 13:57:33.696063  573633 out.go:252]   - Booting up control plane ...
	I1124 13:57:33.696177  573633 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1124 13:57:33.696280  573633 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1124 13:57:33.696945  573633 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1124 13:57:33.710532  573633 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1124 13:57:33.710620  573633 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1124 13:57:33.716564  573633 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1124 13:57:33.716899  573633 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1124 13:57:33.716980  573633 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1124 13:57:33.819935  573633 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1124 13:57:33.820045  573633 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1124 13:57:34.821074  573633 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001191289s
	I1124 13:57:34.824226  573633 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1124 13:57:34.824378  573633 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1124 13:57:34.824498  573633 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1124 13:57:34.824577  573633 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1124 13:57:32.331957  549693 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 13:57:32.332463  549693 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 13:57:32.332529  549693 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 13:57:32.332587  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 13:57:32.360218  549693 cri.go:89] found id: "d5c7a42a438924178c1e3946f6437111c8fac5f01d25a6f15d1f63c567efb073"
	I1124 13:57:32.360238  549693 cri.go:89] found id: ""
	I1124 13:57:32.360246  549693 logs.go:282] 1 containers: [d5c7a42a438924178c1e3946f6437111c8fac5f01d25a6f15d1f63c567efb073]
	I1124 13:57:32.360297  549693 ssh_runner.go:195] Run: which crictl
	I1124 13:57:32.364109  549693 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 13:57:32.364160  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 13:57:32.389549  549693 cri.go:89] found id: ""
	I1124 13:57:32.389572  549693 logs.go:282] 0 containers: []
	W1124 13:57:32.389579  549693 logs.go:284] No container was found matching "etcd"
	I1124 13:57:32.389585  549693 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 13:57:32.389635  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 13:57:32.414359  549693 cri.go:89] found id: ""
	I1124 13:57:32.414383  549693 logs.go:282] 0 containers: []
	W1124 13:57:32.414393  549693 logs.go:284] No container was found matching "coredns"
	I1124 13:57:32.414401  549693 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 13:57:32.414462  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 13:57:32.440008  549693 cri.go:89] found id: "09474ec71ef80f8f9036b351250c36774a1251b7d7957550d815fc37dbb0f4ff"
	I1124 13:57:32.440036  549693 cri.go:89] found id: ""
	I1124 13:57:32.440045  549693 logs.go:282] 1 containers: [09474ec71ef80f8f9036b351250c36774a1251b7d7957550d815fc37dbb0f4ff]
	I1124 13:57:32.440097  549693 ssh_runner.go:195] Run: which crictl
	I1124 13:57:32.443872  549693 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 13:57:32.443941  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 13:57:32.469401  549693 cri.go:89] found id: ""
	I1124 13:57:32.469424  549693 logs.go:282] 0 containers: []
	W1124 13:57:32.469434  549693 logs.go:284] No container was found matching "kube-proxy"
	I1124 13:57:32.469442  549693 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 13:57:32.469496  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 13:57:32.496809  549693 cri.go:89] found id: "196609a37eafb369c7245cccb47becbe53c492038396d82778751fd2bc00c121"
	I1124 13:57:32.496832  549693 cri.go:89] found id: ""
	I1124 13:57:32.496842  549693 logs.go:282] 1 containers: [196609a37eafb369c7245cccb47becbe53c492038396d82778751fd2bc00c121]
	I1124 13:57:32.496906  549693 ssh_runner.go:195] Run: which crictl
	I1124 13:57:32.500527  549693 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 13:57:32.500585  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 13:57:32.527346  549693 cri.go:89] found id: ""
	I1124 13:57:32.527369  549693 logs.go:282] 0 containers: []
	W1124 13:57:32.527378  549693 logs.go:284] No container was found matching "kindnet"
	I1124 13:57:32.527385  549693 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1124 13:57:32.527451  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 13:57:32.553285  549693 cri.go:89] found id: ""
	I1124 13:57:32.553309  549693 logs.go:282] 0 containers: []
	W1124 13:57:32.553319  549693 logs.go:284] No container was found matching "storage-provisioner"
	I1124 13:57:32.553331  549693 logs.go:123] Gathering logs for kube-controller-manager [196609a37eafb369c7245cccb47becbe53c492038396d82778751fd2bc00c121] ...
	I1124 13:57:32.553348  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 196609a37eafb369c7245cccb47becbe53c492038396d82778751fd2bc00c121"
	I1124 13:57:32.577411  549693 logs.go:123] Gathering logs for CRI-O ...
	I1124 13:57:32.577432  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 13:57:32.630224  549693 logs.go:123] Gathering logs for container status ...
	I1124 13:57:32.630257  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 13:57:32.660133  549693 logs.go:123] Gathering logs for kubelet ...
	I1124 13:57:32.660162  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 13:57:32.739270  549693 logs.go:123] Gathering logs for dmesg ...
	I1124 13:57:32.739307  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 13:57:32.757046  549693 logs.go:123] Gathering logs for describe nodes ...
	I1124 13:57:32.757070  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 13:57:32.823854  549693 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 13:57:32.823873  549693 logs.go:123] Gathering logs for kube-apiserver [d5c7a42a438924178c1e3946f6437111c8fac5f01d25a6f15d1f63c567efb073] ...
	I1124 13:57:32.823903  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d5c7a42a438924178c1e3946f6437111c8fac5f01d25a6f15d1f63c567efb073"
	I1124 13:57:32.858596  549693 logs.go:123] Gathering logs for kube-scheduler [09474ec71ef80f8f9036b351250c36774a1251b7d7957550d815fc37dbb0f4ff] ...
	I1124 13:57:32.858646  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 09474ec71ef80f8f9036b351250c36774a1251b7d7957550d815fc37dbb0f4ff"
	I1124 13:57:33.265260  571407 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:57:33.764600  571407 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:57:34.265353  571407 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:57:34.764610  571407 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:57:35.265350  571407 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:57:35.765089  571407 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:57:36.264652  571407 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:57:36.765004  571407 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:57:37.264420  571407 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:57:37.764671  571407 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:57:38.264876  571407 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:57:38.356087  571407 kubeadm.go:1114] duration metric: took 11.678328604s to wait for elevateKubeSystemPrivileges
	I1124 13:57:38.356138  571407 kubeadm.go:403] duration metric: took 21.878803001s to StartCluster
	I1124 13:57:38.356163  571407 settings.go:142] acquiring lock: {Name:mk72c17792ecaf5f4aecae499df19a0043a48eea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:57:38.356246  571407 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21932-348000/kubeconfig
	I1124 13:57:38.357783  571407 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-348000/kubeconfig: {Name:mk6bbc2300c711b206dd5e2ef6fd04da250c6338 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:57:38.358051  571407 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1124 13:57:38.358088  571407 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 13:57:38.358147  571407 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 13:57:38.358255  571407 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-551674"
	I1124 13:57:38.358277  571407 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-551674"
	I1124 13:57:38.358300  571407 config.go:182] Loaded profile config "old-k8s-version-551674": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1124 13:57:38.358316  571407 host.go:66] Checking if "old-k8s-version-551674" exists ...
	I1124 13:57:38.358356  571407 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-551674"
	I1124 13:57:38.358376  571407 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-551674"
	I1124 13:57:38.358846  571407 cli_runner.go:164] Run: docker container inspect old-k8s-version-551674 --format={{.State.Status}}
	I1124 13:57:38.359002  571407 cli_runner.go:164] Run: docker container inspect old-k8s-version-551674 --format={{.State.Status}}
	I1124 13:57:38.359747  571407 out.go:179] * Verifying Kubernetes components...
	I1124 13:57:38.361522  571407 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 13:57:38.391918  571407 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-551674"
	I1124 13:57:38.391968  571407 host.go:66] Checking if "old-k8s-version-551674" exists ...
	I1124 13:57:38.392563  571407 cli_runner.go:164] Run: docker container inspect old-k8s-version-551674 --format={{.State.Status}}
	I1124 13:57:38.394763  571407 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 13:57:36.451226  573633 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.626840434s
	I1124 13:57:36.960539  573633 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.135640493s
	I1124 13:57:38.826634  573633 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.002373914s
	I1124 13:57:38.840986  573633 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1124 13:57:38.851423  573633 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1124 13:57:38.860461  573633 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1124 13:57:38.860750  573633 kubeadm.go:319] [mark-control-plane] Marking the node no-preload-495729 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1124 13:57:38.868251  573633 kubeadm.go:319] [bootstrap-token] Using token: 48ihnp.vwtbijadec283ifs
	I1124 13:57:38.396071  571407 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 13:57:38.396092  571407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 13:57:38.396150  571407 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-551674
	I1124 13:57:38.418200  571407 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 13:57:38.418287  571407 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 13:57:38.418389  571407 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-551674
	I1124 13:57:38.427148  571407 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/old-k8s-version-551674/id_rsa Username:docker}
	I1124 13:57:38.452725  571407 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/old-k8s-version-551674/id_rsa Username:docker}
	I1124 13:57:38.477975  571407 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1124 13:57:38.557120  571407 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 13:57:38.568275  571407 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 13:57:38.580397  571407 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 13:57:38.734499  571407 start.go:977] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I1124 13:57:38.735724  571407 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-551674" to be "Ready" ...
	I1124 13:57:38.974952  571407 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1124 13:57:38.869902  573633 out.go:252]   - Configuring RBAC rules ...
	I1124 13:57:38.870039  573633 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1124 13:57:38.873723  573633 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1124 13:57:38.878666  573633 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1124 13:57:38.881648  573633 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1124 13:57:38.884769  573633 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1124 13:57:38.889885  573633 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1124 13:57:39.234810  573633 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1124 13:57:39.655817  573633 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1124 13:57:35.405030  549693 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 13:57:35.405441  549693 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 13:57:35.405500  549693 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 13:57:35.405562  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 13:57:35.436526  549693 cri.go:89] found id: "d5c7a42a438924178c1e3946f6437111c8fac5f01d25a6f15d1f63c567efb073"
	I1124 13:57:35.436546  549693 cri.go:89] found id: ""
	I1124 13:57:35.436556  549693 logs.go:282] 1 containers: [d5c7a42a438924178c1e3946f6437111c8fac5f01d25a6f15d1f63c567efb073]
	I1124 13:57:35.436606  549693 ssh_runner.go:195] Run: which crictl
	I1124 13:57:35.440553  549693 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 13:57:35.440627  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 13:57:35.469691  549693 cri.go:89] found id: ""
	I1124 13:57:35.469714  549693 logs.go:282] 0 containers: []
	W1124 13:57:35.469724  549693 logs.go:284] No container was found matching "etcd"
	I1124 13:57:35.469731  549693 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 13:57:35.469778  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 13:57:35.498349  549693 cri.go:89] found id: ""
	I1124 13:57:35.498374  549693 logs.go:282] 0 containers: []
	W1124 13:57:35.498384  549693 logs.go:284] No container was found matching "coredns"
	I1124 13:57:35.498392  549693 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 13:57:35.498445  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 13:57:35.524590  549693 cri.go:89] found id: "09474ec71ef80f8f9036b351250c36774a1251b7d7957550d815fc37dbb0f4ff"
	I1124 13:57:35.524611  549693 cri.go:89] found id: ""
	I1124 13:57:35.524621  549693 logs.go:282] 1 containers: [09474ec71ef80f8f9036b351250c36774a1251b7d7957550d815fc37dbb0f4ff]
	I1124 13:57:35.524672  549693 ssh_runner.go:195] Run: which crictl
	I1124 13:57:35.529028  549693 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 13:57:35.529079  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 13:57:35.559998  549693 cri.go:89] found id: ""
	I1124 13:57:35.560022  549693 logs.go:282] 0 containers: []
	W1124 13:57:35.560032  549693 logs.go:284] No container was found matching "kube-proxy"
	I1124 13:57:35.560039  549693 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 13:57:35.560088  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 13:57:35.589880  549693 cri.go:89] found id: "196609a37eafb369c7245cccb47becbe53c492038396d82778751fd2bc00c121"
	I1124 13:57:35.589924  549693 cri.go:89] found id: ""
	I1124 13:57:35.589935  549693 logs.go:282] 1 containers: [196609a37eafb369c7245cccb47becbe53c492038396d82778751fd2bc00c121]
	I1124 13:57:35.589988  549693 ssh_runner.go:195] Run: which crictl
	I1124 13:57:35.593704  549693 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 13:57:35.593762  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 13:57:35.618198  549693 cri.go:89] found id: ""
	I1124 13:57:35.618221  549693 logs.go:282] 0 containers: []
	W1124 13:57:35.618231  549693 logs.go:284] No container was found matching "kindnet"
	I1124 13:57:35.618238  549693 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1124 13:57:35.618287  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 13:57:35.644239  549693 cri.go:89] found id: ""
	I1124 13:57:35.644261  549693 logs.go:282] 0 containers: []
	W1124 13:57:35.644271  549693 logs.go:284] No container was found matching "storage-provisioner"
	I1124 13:57:35.644283  549693 logs.go:123] Gathering logs for CRI-O ...
	I1124 13:57:35.644296  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 13:57:35.704869  549693 logs.go:123] Gathering logs for container status ...
	I1124 13:57:35.704905  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 13:57:35.734591  549693 logs.go:123] Gathering logs for kubelet ...
	I1124 13:57:35.734619  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 13:57:35.851103  549693 logs.go:123] Gathering logs for dmesg ...
	I1124 13:57:35.851135  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 13:57:35.868937  549693 logs.go:123] Gathering logs for describe nodes ...
	I1124 13:57:35.868962  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 13:57:35.941457  549693 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 13:57:35.941484  549693 logs.go:123] Gathering logs for kube-apiserver [d5c7a42a438924178c1e3946f6437111c8fac5f01d25a6f15d1f63c567efb073] ...
	I1124 13:57:35.941500  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d5c7a42a438924178c1e3946f6437111c8fac5f01d25a6f15d1f63c567efb073"
	I1124 13:57:35.982863  549693 logs.go:123] Gathering logs for kube-scheduler [09474ec71ef80f8f9036b351250c36774a1251b7d7957550d815fc37dbb0f4ff] ...
	I1124 13:57:35.982912  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 09474ec71ef80f8f9036b351250c36774a1251b7d7957550d815fc37dbb0f4ff"
	I1124 13:57:36.041059  549693 logs.go:123] Gathering logs for kube-controller-manager [196609a37eafb369c7245cccb47becbe53c492038396d82778751fd2bc00c121] ...
	I1124 13:57:36.041094  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 196609a37eafb369c7245cccb47becbe53c492038396d82778751fd2bc00c121"
	I1124 13:57:38.575953  549693 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 13:57:38.576325  549693 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 13:57:38.576395  549693 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 13:57:38.576458  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 13:57:38.609454  549693 cri.go:89] found id: "d5c7a42a438924178c1e3946f6437111c8fac5f01d25a6f15d1f63c567efb073"
	I1124 13:57:38.609479  549693 cri.go:89] found id: ""
	I1124 13:57:38.609490  549693 logs.go:282] 1 containers: [d5c7a42a438924178c1e3946f6437111c8fac5f01d25a6f15d1f63c567efb073]
	I1124 13:57:38.609558  549693 ssh_runner.go:195] Run: which crictl
	I1124 13:57:38.614057  549693 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 13:57:38.614122  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 13:57:38.653884  549693 cri.go:89] found id: ""
	I1124 13:57:38.653944  549693 logs.go:282] 0 containers: []
	W1124 13:57:38.653957  549693 logs.go:284] No container was found matching "etcd"
	I1124 13:57:38.653965  549693 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 13:57:38.654177  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 13:57:38.694950  549693 cri.go:89] found id: ""
	I1124 13:57:38.694982  549693 logs.go:282] 0 containers: []
	W1124 13:57:38.694992  549693 logs.go:284] No container was found matching "coredns"
	I1124 13:57:38.695000  549693 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 13:57:38.695073  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 13:57:38.730951  549693 cri.go:89] found id: "09474ec71ef80f8f9036b351250c36774a1251b7d7957550d815fc37dbb0f4ff"
	I1124 13:57:38.731957  549693 cri.go:89] found id: ""
	I1124 13:57:38.731971  549693 logs.go:282] 1 containers: [09474ec71ef80f8f9036b351250c36774a1251b7d7957550d815fc37dbb0f4ff]
	I1124 13:57:38.732043  549693 ssh_runner.go:195] Run: which crictl
	I1124 13:57:38.737061  549693 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 13:57:38.737131  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 13:57:38.772509  549693 cri.go:89] found id: ""
	I1124 13:57:38.772539  549693 logs.go:282] 0 containers: []
	W1124 13:57:38.772552  549693 logs.go:284] No container was found matching "kube-proxy"
	I1124 13:57:38.772560  549693 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 13:57:38.772620  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 13:57:38.807273  549693 cri.go:89] found id: "196609a37eafb369c7245cccb47becbe53c492038396d82778751fd2bc00c121"
	I1124 13:57:38.807296  549693 cri.go:89] found id: ""
	I1124 13:57:38.807306  549693 logs.go:282] 1 containers: [196609a37eafb369c7245cccb47becbe53c492038396d82778751fd2bc00c121]
	I1124 13:57:38.807364  549693 ssh_runner.go:195] Run: which crictl
	I1124 13:57:38.811473  549693 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 13:57:38.811539  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 13:57:38.840830  549693 cri.go:89] found id: ""
	I1124 13:57:38.840858  549693 logs.go:282] 0 containers: []
	W1124 13:57:38.840869  549693 logs.go:284] No container was found matching "kindnet"
	I1124 13:57:38.840878  549693 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1124 13:57:38.840960  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 13:57:38.874818  549693 cri.go:89] found id: ""
	I1124 13:57:38.874843  549693 logs.go:282] 0 containers: []
	W1124 13:57:38.874853  549693 logs.go:284] No container was found matching "storage-provisioner"
	I1124 13:57:38.874866  549693 logs.go:123] Gathering logs for dmesg ...
	I1124 13:57:38.874882  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 13:57:38.898369  549693 logs.go:123] Gathering logs for describe nodes ...
	I1124 13:57:38.898408  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 13:57:38.967437  549693 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 13:57:38.967473  549693 logs.go:123] Gathering logs for kube-apiserver [d5c7a42a438924178c1e3946f6437111c8fac5f01d25a6f15d1f63c567efb073] ...
	I1124 13:57:38.967491  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d5c7a42a438924178c1e3946f6437111c8fac5f01d25a6f15d1f63c567efb073"
	I1124 13:57:39.001624  549693 logs.go:123] Gathering logs for kube-scheduler [09474ec71ef80f8f9036b351250c36774a1251b7d7957550d815fc37dbb0f4ff] ...
	I1124 13:57:39.001656  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 09474ec71ef80f8f9036b351250c36774a1251b7d7957550d815fc37dbb0f4ff"
	I1124 13:57:39.051991  549693 logs.go:123] Gathering logs for kube-controller-manager [196609a37eafb369c7245cccb47becbe53c492038396d82778751fd2bc00c121] ...
	I1124 13:57:39.052020  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 196609a37eafb369c7245cccb47becbe53c492038396d82778751fd2bc00c121"
	I1124 13:57:39.079565  549693 logs.go:123] Gathering logs for CRI-O ...
	I1124 13:57:39.079589  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 13:57:39.133518  549693 logs.go:123] Gathering logs for container status ...
	I1124 13:57:39.133552  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 13:57:39.171263  549693 logs.go:123] Gathering logs for kubelet ...
	I1124 13:57:39.171297  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 13:57:40.232134  573633 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1124 13:57:40.233041  573633 kubeadm.go:319] 
	I1124 13:57:40.233131  573633 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1124 13:57:40.233139  573633 kubeadm.go:319] 
	I1124 13:57:40.233225  573633 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1124 13:57:40.233235  573633 kubeadm.go:319] 
	I1124 13:57:40.233261  573633 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1124 13:57:40.233393  573633 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1124 13:57:40.233486  573633 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1124 13:57:40.233505  573633 kubeadm.go:319] 
	I1124 13:57:40.233585  573633 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1124 13:57:40.233594  573633 kubeadm.go:319] 
	I1124 13:57:40.233688  573633 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1124 13:57:40.233698  573633 kubeadm.go:319] 
	I1124 13:57:40.233785  573633 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1124 13:57:40.233930  573633 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1124 13:57:40.234051  573633 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1124 13:57:40.234059  573633 kubeadm.go:319] 
	I1124 13:57:40.234181  573633 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1124 13:57:40.234294  573633 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1124 13:57:40.234303  573633 kubeadm.go:319] 
	I1124 13:57:40.234416  573633 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 48ihnp.vwtbijadec283ifs \
	I1124 13:57:40.234583  573633 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:8508f5e374ce1614712f271f50423a392652f73206d8a868cc7aac45c80e4a0c \
	I1124 13:57:40.234632  573633 kubeadm.go:319] 	--control-plane 
	I1124 13:57:40.234642  573633 kubeadm.go:319] 
	I1124 13:57:40.234762  573633 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1124 13:57:40.234772  573633 kubeadm.go:319] 
	I1124 13:57:40.234912  573633 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 48ihnp.vwtbijadec283ifs \
	I1124 13:57:40.235064  573633 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:8508f5e374ce1614712f271f50423a392652f73206d8a868cc7aac45c80e4a0c 
	I1124 13:57:40.236690  573633 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1124 13:57:40.236874  573633 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1124 13:57:40.236913  573633 cni.go:84] Creating CNI manager for ""
	I1124 13:57:40.236923  573633 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 13:57:40.238422  573633 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1124 13:57:38.976426  571407 addons.go:530] duration metric: took 618.270366ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1124 13:57:39.240477  571407 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-551674" context rescaled to 1 replicas
	W1124 13:57:40.738964  571407 node_ready.go:57] node "old-k8s-version-551674" has "Ready":"False" status (will retry)
	W1124 13:57:42.739326  571407 node_ready.go:57] node "old-k8s-version-551674" has "Ready":"False" status (will retry)
	I1124 13:57:40.239630  573633 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1124 13:57:40.244652  573633 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1124 13:57:40.244672  573633 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1124 13:57:40.258072  573633 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1124 13:57:40.463145  573633 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1124 13:57:40.463221  573633 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:57:40.463229  573633 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-495729 minikube.k8s.io/updated_at=2025_11_24T13_57_40_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=b5d1c9f4e75f4e638a533695fd62619949cefcab minikube.k8s.io/name=no-preload-495729 minikube.k8s.io/primary=true
	I1124 13:57:40.546615  573633 ops.go:34] apiserver oom_adj: -16
	I1124 13:57:40.546689  573633 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:57:41.047068  573633 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:57:41.547628  573633 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:57:42.047090  573633 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:57:42.547841  573633 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:57:43.047723  573633 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:57:43.547225  573633 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:57:44.047166  573633 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:57:44.546815  573633 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:57:44.613170  573633 kubeadm.go:1114] duration metric: took 4.150025246s to wait for elevateKubeSystemPrivileges
	I1124 13:57:44.613210  573633 kubeadm.go:403] duration metric: took 15.531076005s to StartCluster
	I1124 13:57:44.613229  573633 settings.go:142] acquiring lock: {Name:mk72c17792ecaf5f4aecae499df19a0043a48eea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:57:44.613290  573633 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21932-348000/kubeconfig
	I1124 13:57:44.614488  573633 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-348000/kubeconfig: {Name:mk6bbc2300c711b206dd5e2ef6fd04da250c6338 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:57:44.614707  573633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1124 13:57:44.614719  573633 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 13:57:44.614809  573633 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 13:57:44.614937  573633 addons.go:70] Setting storage-provisioner=true in profile "no-preload-495729"
	I1124 13:57:44.614961  573633 addons.go:239] Setting addon storage-provisioner=true in "no-preload-495729"
	I1124 13:57:44.614965  573633 addons.go:70] Setting default-storageclass=true in profile "no-preload-495729"
	I1124 13:57:44.615007  573633 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-495729"
	I1124 13:57:44.615020  573633 host.go:66] Checking if "no-preload-495729" exists ...
	I1124 13:57:44.614969  573633 config.go:182] Loaded profile config "no-preload-495729": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 13:57:44.615385  573633 cli_runner.go:164] Run: docker container inspect no-preload-495729 --format={{.State.Status}}
	I1124 13:57:44.615544  573633 cli_runner.go:164] Run: docker container inspect no-preload-495729 --format={{.State.Status}}
	I1124 13:57:44.616210  573633 out.go:179] * Verifying Kubernetes components...
	I1124 13:57:44.617567  573633 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 13:57:44.637044  573633 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 13:57:44.638487  573633 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 13:57:44.638507  573633 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 13:57:44.638569  573633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-495729
	I1124 13:57:44.638634  573633 addons.go:239] Setting addon default-storageclass=true in "no-preload-495729"
	I1124 13:57:44.638680  573633 host.go:66] Checking if "no-preload-495729" exists ...
	I1124 13:57:44.639172  573633 cli_runner.go:164] Run: docker container inspect no-preload-495729 --format={{.State.Status}}
	I1124 13:57:44.668307  573633 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/no-preload-495729/id_rsa Username:docker}
	I1124 13:57:44.671806  573633 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 13:57:44.671829  573633 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 13:57:44.671908  573633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-495729
	I1124 13:57:44.694240  573633 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/no-preload-495729/id_rsa Username:docker}
	I1124 13:57:44.703940  573633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1124 13:57:44.764418  573633 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 13:57:44.788662  573633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 13:57:44.813707  573633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 13:57:44.879458  573633 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1124 13:57:44.880723  573633 node_ready.go:35] waiting up to 6m0s for node "no-preload-495729" to be "Ready" ...
	I1124 13:57:45.096804  573633 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1124 13:57:45.098448  573633 addons.go:530] duration metric: took 483.641407ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1124 13:57:41.784356  549693 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 13:57:41.784798  549693 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 13:57:41.784856  549693 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 13:57:41.784947  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 13:57:41.811621  549693 cri.go:89] found id: "d5c7a42a438924178c1e3946f6437111c8fac5f01d25a6f15d1f63c567efb073"
	I1124 13:57:41.811648  549693 cri.go:89] found id: ""
	I1124 13:57:41.811658  549693 logs.go:282] 1 containers: [d5c7a42a438924178c1e3946f6437111c8fac5f01d25a6f15d1f63c567efb073]
	I1124 13:57:41.811704  549693 ssh_runner.go:195] Run: which crictl
	I1124 13:57:41.815627  549693 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 13:57:41.815685  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 13:57:41.842620  549693 cri.go:89] found id: ""
	I1124 13:57:41.842646  549693 logs.go:282] 0 containers: []
	W1124 13:57:41.842657  549693 logs.go:284] No container was found matching "etcd"
	I1124 13:57:41.842671  549693 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 13:57:41.842723  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 13:57:41.867627  549693 cri.go:89] found id: ""
	I1124 13:57:41.867653  549693 logs.go:282] 0 containers: []
	W1124 13:57:41.867663  549693 logs.go:284] No container was found matching "coredns"
	I1124 13:57:41.867670  549693 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 13:57:41.867720  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 13:57:41.892754  549693 cri.go:89] found id: "09474ec71ef80f8f9036b351250c36774a1251b7d7957550d815fc37dbb0f4ff"
	I1124 13:57:41.892774  549693 cri.go:89] found id: ""
	I1124 13:57:41.892784  549693 logs.go:282] 1 containers: [09474ec71ef80f8f9036b351250c36774a1251b7d7957550d815fc37dbb0f4ff]
	I1124 13:57:41.892833  549693 ssh_runner.go:195] Run: which crictl
	I1124 13:57:41.896560  549693 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 13:57:41.896627  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 13:57:41.921407  549693 cri.go:89] found id: ""
	I1124 13:57:41.921427  549693 logs.go:282] 0 containers: []
	W1124 13:57:41.921434  549693 logs.go:284] No container was found matching "kube-proxy"
	I1124 13:57:41.921440  549693 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 13:57:41.921485  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 13:57:41.947566  549693 cri.go:89] found id: "196609a37eafb369c7245cccb47becbe53c492038396d82778751fd2bc00c121"
	I1124 13:57:41.947586  549693 cri.go:89] found id: ""
	I1124 13:57:41.947594  549693 logs.go:282] 1 containers: [196609a37eafb369c7245cccb47becbe53c492038396d82778751fd2bc00c121]
	I1124 13:57:41.947645  549693 ssh_runner.go:195] Run: which crictl
	I1124 13:57:41.951422  549693 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 13:57:41.951474  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 13:57:41.975996  549693 cri.go:89] found id: ""
	I1124 13:57:41.976020  549693 logs.go:282] 0 containers: []
	W1124 13:57:41.976030  549693 logs.go:284] No container was found matching "kindnet"
	I1124 13:57:41.976037  549693 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1124 13:57:41.976079  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 13:57:42.000752  549693 cri.go:89] found id: ""
	I1124 13:57:42.000777  549693 logs.go:282] 0 containers: []
	W1124 13:57:42.000787  549693 logs.go:284] No container was found matching "storage-provisioner"
	I1124 13:57:42.000798  549693 logs.go:123] Gathering logs for dmesg ...
	I1124 13:57:42.000809  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 13:57:42.016535  549693 logs.go:123] Gathering logs for describe nodes ...
	I1124 13:57:42.016557  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 13:57:42.071718  549693 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 13:57:42.071744  549693 logs.go:123] Gathering logs for kube-apiserver [d5c7a42a438924178c1e3946f6437111c8fac5f01d25a6f15d1f63c567efb073] ...
	I1124 13:57:42.071761  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d5c7a42a438924178c1e3946f6437111c8fac5f01d25a6f15d1f63c567efb073"
	I1124 13:57:42.105106  549693 logs.go:123] Gathering logs for kube-scheduler [09474ec71ef80f8f9036b351250c36774a1251b7d7957550d815fc37dbb0f4ff] ...
	I1124 13:57:42.105136  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 09474ec71ef80f8f9036b351250c36774a1251b7d7957550d815fc37dbb0f4ff"
	I1124 13:57:42.151526  549693 logs.go:123] Gathering logs for kube-controller-manager [196609a37eafb369c7245cccb47becbe53c492038396d82778751fd2bc00c121] ...
	I1124 13:57:42.151556  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 196609a37eafb369c7245cccb47becbe53c492038396d82778751fd2bc00c121"
	I1124 13:57:42.177057  549693 logs.go:123] Gathering logs for CRI-O ...
	I1124 13:57:42.177084  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 13:57:42.228928  549693 logs.go:123] Gathering logs for container status ...
	I1124 13:57:42.228955  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 13:57:42.256638  549693 logs.go:123] Gathering logs for kubelet ...
	I1124 13:57:42.256661  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 13:57:44.839181  549693 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 13:57:44.839657  549693 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 13:57:44.839724  549693 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 13:57:44.839783  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 13:57:44.874512  549693 cri.go:89] found id: "d5c7a42a438924178c1e3946f6437111c8fac5f01d25a6f15d1f63c567efb073"
	I1124 13:57:44.874558  549693 cri.go:89] found id: ""
	I1124 13:57:44.874569  549693 logs.go:282] 1 containers: [d5c7a42a438924178c1e3946f6437111c8fac5f01d25a6f15d1f63c567efb073]
	I1124 13:57:44.874628  549693 ssh_runner.go:195] Run: which crictl
	I1124 13:57:44.880817  549693 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 13:57:44.880879  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 13:57:44.919086  549693 cri.go:89] found id: ""
	I1124 13:57:44.919116  549693 logs.go:282] 0 containers: []
	W1124 13:57:44.919127  549693 logs.go:284] No container was found matching "etcd"
	I1124 13:57:44.919136  549693 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 13:57:44.919192  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 13:57:44.953710  549693 cri.go:89] found id: ""
	I1124 13:57:44.953736  549693 logs.go:282] 0 containers: []
	W1124 13:57:44.953747  549693 logs.go:284] No container was found matching "coredns"
	I1124 13:57:44.953756  549693 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 13:57:44.953813  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 13:57:44.985405  549693 cri.go:89] found id: "09474ec71ef80f8f9036b351250c36774a1251b7d7957550d815fc37dbb0f4ff"
	I1124 13:57:44.985432  549693 cri.go:89] found id: ""
	I1124 13:57:44.985443  549693 logs.go:282] 1 containers: [09474ec71ef80f8f9036b351250c36774a1251b7d7957550d815fc37dbb0f4ff]
	I1124 13:57:44.985500  549693 ssh_runner.go:195] Run: which crictl
	I1124 13:57:44.989883  549693 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 13:57:44.989990  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 13:57:45.019512  549693 cri.go:89] found id: ""
	I1124 13:57:45.019554  549693 logs.go:282] 0 containers: []
	W1124 13:57:45.019567  549693 logs.go:284] No container was found matching "kube-proxy"
	I1124 13:57:45.019575  549693 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 13:57:45.019633  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 13:57:45.048774  549693 cri.go:89] found id: "196609a37eafb369c7245cccb47becbe53c492038396d82778751fd2bc00c121"
	I1124 13:57:45.048798  549693 cri.go:89] found id: ""
	I1124 13:57:45.048808  549693 logs.go:282] 1 containers: [196609a37eafb369c7245cccb47becbe53c492038396d82778751fd2bc00c121]
	I1124 13:57:45.048872  549693 ssh_runner.go:195] Run: which crictl
	I1124 13:57:45.053561  549693 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 13:57:45.053629  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 13:57:45.086436  549693 cri.go:89] found id: ""
	I1124 13:57:45.086467  549693 logs.go:282] 0 containers: []
	W1124 13:57:45.086479  549693 logs.go:284] No container was found matching "kindnet"
	I1124 13:57:45.086487  549693 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1124 13:57:45.086560  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 13:57:45.119591  549693 cri.go:89] found id: ""
	I1124 13:57:45.119620  549693 logs.go:282] 0 containers: []
	W1124 13:57:45.119631  549693 logs.go:284] No container was found matching "storage-provisioner"
	I1124 13:57:45.119644  549693 logs.go:123] Gathering logs for kube-scheduler [09474ec71ef80f8f9036b351250c36774a1251b7d7957550d815fc37dbb0f4ff] ...
	I1124 13:57:45.119659  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 09474ec71ef80f8f9036b351250c36774a1251b7d7957550d815fc37dbb0f4ff"
	I1124 13:57:45.171180  549693 logs.go:123] Gathering logs for kube-controller-manager [196609a37eafb369c7245cccb47becbe53c492038396d82778751fd2bc00c121] ...
	I1124 13:57:45.171213  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 196609a37eafb369c7245cccb47becbe53c492038396d82778751fd2bc00c121"
	I1124 13:57:45.199707  549693 logs.go:123] Gathering logs for CRI-O ...
	I1124 13:57:45.199738  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W1124 13:57:44.739528  571407 node_ready.go:57] node "old-k8s-version-551674" has "Ready":"False" status (will retry)
	W1124 13:57:47.239175  571407 node_ready.go:57] node "old-k8s-version-551674" has "Ready":"False" status (will retry)
	I1124 13:57:45.383105  573633 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-495729" context rescaled to 1 replicas
	W1124 13:57:46.884687  573633 node_ready.go:57] node "no-preload-495729" has "Ready":"False" status (will retry)
	W1124 13:57:49.384056  573633 node_ready.go:57] node "no-preload-495729" has "Ready":"False" status (will retry)
	I1124 13:57:45.250283  549693 logs.go:123] Gathering logs for container status ...
	I1124 13:57:45.250315  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 13:57:45.279720  549693 logs.go:123] Gathering logs for kubelet ...
	I1124 13:57:45.279745  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 13:57:45.360786  549693 logs.go:123] Gathering logs for dmesg ...
	I1124 13:57:45.360817  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 13:57:45.378763  549693 logs.go:123] Gathering logs for describe nodes ...
	I1124 13:57:45.378798  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 13:57:49.738537  571407 node_ready.go:57] node "old-k8s-version-551674" has "Ready":"False" status (will retry)
	I1124 13:57:51.238731  571407 node_ready.go:49] node "old-k8s-version-551674" is "Ready"
	I1124 13:57:51.238764  571407 node_ready.go:38] duration metric: took 12.503011397s for node "old-k8s-version-551674" to be "Ready" ...
	I1124 13:57:51.238781  571407 api_server.go:52] waiting for apiserver process to appear ...
	I1124 13:57:51.238850  571407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 13:57:51.254673  571407 api_server.go:72] duration metric: took 12.896544303s to wait for apiserver process to appear ...
	I1124 13:57:51.254695  571407 api_server.go:88] waiting for apiserver healthz status ...
	I1124 13:57:51.254714  571407 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1124 13:57:51.260272  571407 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1124 13:57:51.261359  571407 api_server.go:141] control plane version: v1.28.0
	I1124 13:57:51.261382  571407 api_server.go:131] duration metric: took 6.681811ms to wait for apiserver health ...
	I1124 13:57:51.261391  571407 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 13:57:51.265577  571407 system_pods.go:59] 8 kube-system pods found
	I1124 13:57:51.265622  571407 system_pods.go:61] "coredns-5dd5756b68-swk4w" [ea9c4e37-9d2c-4148-b9cf-1961e1e7923f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 13:57:51.265632  571407 system_pods.go:61] "etcd-old-k8s-version-551674" [d41f7874-4dae-4aca-a539-6cc85c0fd65f] Running
	I1124 13:57:51.265656  571407 system_pods.go:61] "kindnet-sz57p" [a75b53b9-cf49-47a0-8184-f678d2dd7fbb] Running
	I1124 13:57:51.265662  571407 system_pods.go:61] "kube-apiserver-old-k8s-version-551674" [bbf37aff-faf4-4a12-8f3e-c16a85518770] Running
	I1124 13:57:51.265672  571407 system_pods.go:61] "kube-controller-manager-old-k8s-version-551674" [5b5b619d-b395-4abd-91d6-0fac3b34542e] Running
	I1124 13:57:51.265677  571407 system_pods.go:61] "kube-proxy-trn2x" [0e1df93d-97cc-48c1-9a95-18cd7d3f1a38] Running
	I1124 13:57:51.265682  571407 system_pods.go:61] "kube-scheduler-old-k8s-version-551674" [63eede78-ef6a-44ab-adeb-18bd57e833db] Running
	I1124 13:57:51.265690  571407 system_pods.go:61] "storage-provisioner" [d77a52ec-4e20-4ade-a015-7e4a4ea5baae] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 13:57:51.265697  571407 system_pods.go:74] duration metric: took 4.300315ms to wait for pod list to return data ...
	I1124 13:57:51.265706  571407 default_sa.go:34] waiting for default service account to be created ...
	I1124 13:57:51.268241  571407 default_sa.go:45] found service account: "default"
	I1124 13:57:51.268262  571407 default_sa.go:55] duration metric: took 2.550382ms for default service account to be created ...
	I1124 13:57:51.268272  571407 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 13:57:51.272099  571407 system_pods.go:86] 8 kube-system pods found
	I1124 13:57:51.272132  571407 system_pods.go:89] "coredns-5dd5756b68-swk4w" [ea9c4e37-9d2c-4148-b9cf-1961e1e7923f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 13:57:51.272139  571407 system_pods.go:89] "etcd-old-k8s-version-551674" [d41f7874-4dae-4aca-a539-6cc85c0fd65f] Running
	I1124 13:57:51.272148  571407 system_pods.go:89] "kindnet-sz57p" [a75b53b9-cf49-47a0-8184-f678d2dd7fbb] Running
	I1124 13:57:51.272158  571407 system_pods.go:89] "kube-apiserver-old-k8s-version-551674" [bbf37aff-faf4-4a12-8f3e-c16a85518770] Running
	I1124 13:57:51.272165  571407 system_pods.go:89] "kube-controller-manager-old-k8s-version-551674" [5b5b619d-b395-4abd-91d6-0fac3b34542e] Running
	I1124 13:57:51.272171  571407 system_pods.go:89] "kube-proxy-trn2x" [0e1df93d-97cc-48c1-9a95-18cd7d3f1a38] Running
	I1124 13:57:51.272179  571407 system_pods.go:89] "kube-scheduler-old-k8s-version-551674" [63eede78-ef6a-44ab-adeb-18bd57e833db] Running
	I1124 13:57:51.272192  571407 system_pods.go:89] "storage-provisioner" [d77a52ec-4e20-4ade-a015-7e4a4ea5baae] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 13:57:51.272221  571407 retry.go:31] will retry after 250.594322ms: missing components: kube-dns
	I1124 13:57:51.527051  571407 system_pods.go:86] 8 kube-system pods found
	I1124 13:57:51.527080  571407 system_pods.go:89] "coredns-5dd5756b68-swk4w" [ea9c4e37-9d2c-4148-b9cf-1961e1e7923f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 13:57:51.527086  571407 system_pods.go:89] "etcd-old-k8s-version-551674" [d41f7874-4dae-4aca-a539-6cc85c0fd65f] Running
	I1124 13:57:51.527092  571407 system_pods.go:89] "kindnet-sz57p" [a75b53b9-cf49-47a0-8184-f678d2dd7fbb] Running
	I1124 13:57:51.527095  571407 system_pods.go:89] "kube-apiserver-old-k8s-version-551674" [bbf37aff-faf4-4a12-8f3e-c16a85518770] Running
	I1124 13:57:51.527099  571407 system_pods.go:89] "kube-controller-manager-old-k8s-version-551674" [5b5b619d-b395-4abd-91d6-0fac3b34542e] Running
	I1124 13:57:51.527103  571407 system_pods.go:89] "kube-proxy-trn2x" [0e1df93d-97cc-48c1-9a95-18cd7d3f1a38] Running
	I1124 13:57:51.527106  571407 system_pods.go:89] "kube-scheduler-old-k8s-version-551674" [63eede78-ef6a-44ab-adeb-18bd57e833db] Running
	I1124 13:57:51.527109  571407 system_pods.go:89] "storage-provisioner" [d77a52ec-4e20-4ade-a015-7e4a4ea5baae] Running
	I1124 13:57:51.527122  571407 system_pods.go:126] duration metric: took 258.838925ms to wait for k8s-apps to be running ...
	I1124 13:57:51.527133  571407 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 13:57:51.527179  571407 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 13:57:51.540991  571407 system_svc.go:56] duration metric: took 13.84612ms WaitForService to wait for kubelet
	I1124 13:57:51.541021  571407 kubeadm.go:587] duration metric: took 13.182896831s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 13:57:51.541038  571407 node_conditions.go:102] verifying NodePressure condition ...
	I1124 13:57:51.543114  571407 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1124 13:57:51.543141  571407 node_conditions.go:123] node cpu capacity is 8
	I1124 13:57:51.543180  571407 node_conditions.go:105] duration metric: took 2.135733ms to run NodePressure ...
	I1124 13:57:51.543201  571407 start.go:242] waiting for startup goroutines ...
	I1124 13:57:51.543213  571407 start.go:247] waiting for cluster config update ...
	I1124 13:57:51.543229  571407 start.go:256] writing updated cluster config ...
	I1124 13:57:51.543556  571407 ssh_runner.go:195] Run: rm -f paused
	I1124 13:57:51.547507  571407 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 13:57:51.551627  571407 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-swk4w" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:57:52.557529  571407 pod_ready.go:94] pod "coredns-5dd5756b68-swk4w" is "Ready"
	I1124 13:57:52.557561  571407 pod_ready.go:86] duration metric: took 1.005905574s for pod "coredns-5dd5756b68-swk4w" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:57:52.560039  571407 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-551674" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:57:52.563934  571407 pod_ready.go:94] pod "etcd-old-k8s-version-551674" is "Ready"
	I1124 13:57:52.563954  571407 pod_ready.go:86] duration metric: took 3.893315ms for pod "etcd-old-k8s-version-551674" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:57:52.566100  571407 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-551674" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:57:52.569851  571407 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-551674" is "Ready"
	I1124 13:57:52.569872  571407 pod_ready.go:86] duration metric: took 3.754642ms for pod "kube-apiserver-old-k8s-version-551674" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:57:52.572231  571407 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-551674" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:57:52.754579  571407 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-551674" is "Ready"
	I1124 13:57:52.754602  571407 pod_ready.go:86] duration metric: took 182.352439ms for pod "kube-controller-manager-old-k8s-version-551674" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:57:52.955707  571407 pod_ready.go:83] waiting for pod "kube-proxy-trn2x" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:57:53.355024  571407 pod_ready.go:94] pod "kube-proxy-trn2x" is "Ready"
	I1124 13:57:53.355055  571407 pod_ready.go:86] duration metric: took 399.32483ms for pod "kube-proxy-trn2x" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:57:53.555122  571407 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-551674" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:57:53.954422  571407 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-551674" is "Ready"
	I1124 13:57:53.954447  571407 pod_ready.go:86] duration metric: took 399.299345ms for pod "kube-scheduler-old-k8s-version-551674" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:57:53.954459  571407 pod_ready.go:40] duration metric: took 2.406920823s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 13:57:53.998980  571407 start.go:625] kubectl: 1.34.2, cluster: 1.28.0 (minor skew: 6)
	I1124 13:57:54.000631  571407 out.go:203] 
	W1124 13:57:54.001877  571407 out.go:285] ! /usr/local/bin/kubectl is version 1.34.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1124 13:57:54.003152  571407 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1124 13:57:54.004712  571407 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-551674" cluster and "default" namespace by default
	W1124 13:57:51.883563  573633 node_ready.go:57] node "no-preload-495729" has "Ready":"False" status (will retry)
	W1124 13:57:53.884084  573633 node_ready.go:57] node "no-preload-495729" has "Ready":"False" status (will retry)
	W1124 13:57:56.383941  573633 node_ready.go:57] node "no-preload-495729" has "Ready":"False" status (will retry)
	I1124 13:57:58.383396  573633 node_ready.go:49] node "no-preload-495729" is "Ready"
	I1124 13:57:58.383426  573633 node_ready.go:38] duration metric: took 13.502676917s for node "no-preload-495729" to be "Ready" ...
	I1124 13:57:58.383444  573633 api_server.go:52] waiting for apiserver process to appear ...
	I1124 13:57:58.383501  573633 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 13:57:58.395442  573633 api_server.go:72] duration metric: took 13.7806825s to wait for apiserver process to appear ...
	I1124 13:57:58.395467  573633 api_server.go:88] waiting for apiserver healthz status ...
	I1124 13:57:58.395493  573633 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1124 13:57:58.399257  573633 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1124 13:57:58.400109  573633 api_server.go:141] control plane version: v1.34.1
	I1124 13:57:58.400130  573633 api_server.go:131] duration metric: took 4.6575ms to wait for apiserver health ...
	I1124 13:57:58.400138  573633 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 13:57:58.402654  573633 system_pods.go:59] 8 kube-system pods found
	I1124 13:57:58.402688  573633 system_pods.go:61] "coredns-66bc5c9577-b7t2v" [cfd3642f-4fab-4d58-ac21-5c59c0820cb6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 13:57:58.402696  573633 system_pods.go:61] "etcd-no-preload-495729" [3c702450-6910-48ff-ab8d-b8edc83c0455] Running
	I1124 13:57:58.402705  573633 system_pods.go:61] "kindnet-mtrx6" [13e7beb5-16ec-46bf-b0b3-c8b800b38541] Running
	I1124 13:57:58.402715  573633 system_pods.go:61] "kube-apiserver-no-preload-495729" [73e7d6bd-36a7-43fb-87be-1800f46c11bc] Running
	I1124 13:57:58.402721  573633 system_pods.go:61] "kube-controller-manager-no-preload-495729" [786e6d00-16a0-41a3-a6d2-cdd177c24c58] Running
	I1124 13:57:58.402727  573633 system_pods.go:61] "kube-proxy-mxzvp" [2527db35-d2ad-41e5-941e-dec7f072eaad] Running
	I1124 13:57:58.402733  573633 system_pods.go:61] "kube-scheduler-no-preload-495729" [26eb6331-d799-47b4-b6cb-95796575d583] Running
	I1124 13:57:58.402743  573633 system_pods.go:61] "storage-provisioner" [0e767e38-974c-400e-8922-3120c696edf5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 13:57:58.402750  573633 system_pods.go:74] duration metric: took 2.605391ms to wait for pod list to return data ...
	I1124 13:57:58.402760  573633 default_sa.go:34] waiting for default service account to be created ...
	I1124 13:57:58.404727  573633 default_sa.go:45] found service account: "default"
	I1124 13:57:58.404744  573633 default_sa.go:55] duration metric: took 1.977462ms for default service account to be created ...
	I1124 13:57:58.404751  573633 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 13:57:58.406749  573633 system_pods.go:86] 8 kube-system pods found
	I1124 13:57:58.406778  573633 system_pods.go:89] "coredns-66bc5c9577-b7t2v" [cfd3642f-4fab-4d58-ac21-5c59c0820cb6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 13:57:58.406785  573633 system_pods.go:89] "etcd-no-preload-495729" [3c702450-6910-48ff-ab8d-b8edc83c0455] Running
	I1124 13:57:58.406791  573633 system_pods.go:89] "kindnet-mtrx6" [13e7beb5-16ec-46bf-b0b3-c8b800b38541] Running
	I1124 13:57:58.406795  573633 system_pods.go:89] "kube-apiserver-no-preload-495729" [73e7d6bd-36a7-43fb-87be-1800f46c11bc] Running
	I1124 13:57:58.406799  573633 system_pods.go:89] "kube-controller-manager-no-preload-495729" [786e6d00-16a0-41a3-a6d2-cdd177c24c58] Running
	I1124 13:57:58.406802  573633 system_pods.go:89] "kube-proxy-mxzvp" [2527db35-d2ad-41e5-941e-dec7f072eaad] Running
	I1124 13:57:58.406806  573633 system_pods.go:89] "kube-scheduler-no-preload-495729" [26eb6331-d799-47b4-b6cb-95796575d583] Running
	I1124 13:57:58.406810  573633 system_pods.go:89] "storage-provisioner" [0e767e38-974c-400e-8922-3120c696edf5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 13:57:58.406833  573633 retry.go:31] will retry after 280.890262ms: missing components: kube-dns
	I1124 13:57:58.691069  573633 system_pods.go:86] 8 kube-system pods found
	I1124 13:57:58.691100  573633 system_pods.go:89] "coredns-66bc5c9577-b7t2v" [cfd3642f-4fab-4d58-ac21-5c59c0820cb6] Running
	I1124 13:57:58.691108  573633 system_pods.go:89] "etcd-no-preload-495729" [3c702450-6910-48ff-ab8d-b8edc83c0455] Running
	I1124 13:57:58.691113  573633 system_pods.go:89] "kindnet-mtrx6" [13e7beb5-16ec-46bf-b0b3-c8b800b38541] Running
	I1124 13:57:58.691123  573633 system_pods.go:89] "kube-apiserver-no-preload-495729" [73e7d6bd-36a7-43fb-87be-1800f46c11bc] Running
	I1124 13:57:58.691129  573633 system_pods.go:89] "kube-controller-manager-no-preload-495729" [786e6d00-16a0-41a3-a6d2-cdd177c24c58] Running
	I1124 13:57:58.691133  573633 system_pods.go:89] "kube-proxy-mxzvp" [2527db35-d2ad-41e5-941e-dec7f072eaad] Running
	I1124 13:57:58.691138  573633 system_pods.go:89] "kube-scheduler-no-preload-495729" [26eb6331-d799-47b4-b6cb-95796575d583] Running
	I1124 13:57:58.691142  573633 system_pods.go:89] "storage-provisioner" [0e767e38-974c-400e-8922-3120c696edf5] Running
	I1124 13:57:58.691152  573633 system_pods.go:126] duration metric: took 286.394896ms to wait for k8s-apps to be running ...
	I1124 13:57:58.691161  573633 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 13:57:58.691221  573633 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 13:57:58.704298  573633 system_svc.go:56] duration metric: took 13.128643ms WaitForService to wait for kubelet
	I1124 13:57:58.704323  573633 kubeadm.go:587] duration metric: took 14.08956962s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 13:57:58.704346  573633 node_conditions.go:102] verifying NodePressure condition ...
	I1124 13:57:58.706460  573633 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1124 13:57:58.706483  573633 node_conditions.go:123] node cpu capacity is 8
	I1124 13:57:58.706498  573633 node_conditions.go:105] duration metric: took 2.144337ms to run NodePressure ...
	I1124 13:57:58.706509  573633 start.go:242] waiting for startup goroutines ...
	I1124 13:57:58.706516  573633 start.go:247] waiting for cluster config update ...
	I1124 13:57:58.706526  573633 start.go:256] writing updated cluster config ...
	I1124 13:57:58.706762  573633 ssh_runner.go:195] Run: rm -f paused
	I1124 13:57:58.710405  573633 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 13:57:58.713121  573633 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-b7t2v" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:57:58.716314  573633 pod_ready.go:94] pod "coredns-66bc5c9577-b7t2v" is "Ready"
	I1124 13:57:58.716337  573633 pod_ready.go:86] duration metric: took 3.194308ms for pod "coredns-66bc5c9577-b7t2v" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:57:58.717767  573633 pod_ready.go:83] waiting for pod "etcd-no-preload-495729" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:57:58.720799  573633 pod_ready.go:94] pod "etcd-no-preload-495729" is "Ready"
	I1124 13:57:58.720832  573633 pod_ready.go:86] duration metric: took 3.047272ms for pod "etcd-no-preload-495729" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:57:58.722338  573633 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-495729" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:57:58.726678  573633 pod_ready.go:94] pod "kube-apiserver-no-preload-495729" is "Ready"
	I1124 13:57:58.726698  573633 pod_ready.go:86] duration metric: took 4.340286ms for pod "kube-apiserver-no-preload-495729" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:57:58.728224  573633 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-495729" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:57:59.114568  573633 pod_ready.go:94] pod "kube-controller-manager-no-preload-495729" is "Ready"
	I1124 13:57:59.114594  573633 pod_ready.go:86] duration metric: took 386.354421ms for pod "kube-controller-manager-no-preload-495729" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:57:59.314263  573633 pod_ready.go:83] waiting for pod "kube-proxy-mxzvp" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:57:59.714595  573633 pod_ready.go:94] pod "kube-proxy-mxzvp" is "Ready"
	I1124 13:57:59.714626  573633 pod_ready.go:86] duration metric: took 400.335662ms for pod "kube-proxy-mxzvp" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:57:59.914675  573633 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-495729" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:57:55.434961  549693 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.056140636s)
	W1124 13:57:55.435004  549693 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I1124 13:57:55.435016  549693 logs.go:123] Gathering logs for kube-apiserver [d5c7a42a438924178c1e3946f6437111c8fac5f01d25a6f15d1f63c567efb073] ...
	I1124 13:57:55.435032  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d5c7a42a438924178c1e3946f6437111c8fac5f01d25a6f15d1f63c567efb073"
	I1124 13:57:57.968563  549693 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 13:58:00.313610  573633 pod_ready.go:94] pod "kube-scheduler-no-preload-495729" is "Ready"
	I1124 13:58:00.313638  573633 pod_ready.go:86] duration metric: took 398.934376ms for pod "kube-scheduler-no-preload-495729" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:58:00.313651  573633 pod_ready.go:40] duration metric: took 1.603207509s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 13:58:00.356983  573633 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1124 13:58:00.358614  573633 out.go:179] * Done! kubectl is now configured to use "no-preload-495729" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 24 13:57:51 old-k8s-version-551674 crio[780]: time="2025-11-24T13:57:51.277497027Z" level=info msg="Started container" PID=2111 containerID=fe23a39bdb0d6c86fd526d9dde1128fcb638e0fc598b0da04dd62857d48eb15e description=kube-system/coredns-5dd5756b68-swk4w/coredns id=8600eebc-ff02-49e2-baa8-3dc1d2ebe1b9 name=/runtime.v1.RuntimeService/StartContainer sandboxID=fc02b7078892d1d82650d0859d1bf678a3f2f516220436dace39468632757d61
	Nov 24 13:57:51 old-k8s-version-551674 crio[780]: time="2025-11-24T13:57:51.27828712Z" level=info msg="Started container" PID=2110 containerID=b4cee7c963e6570c33fa55822dd9b2160ae7fe467e4ba2a74329d96fbc4f25ce description=kube-system/storage-provisioner/storage-provisioner id=4e556793-8819-43b9-979f-c394e018c357 name=/runtime.v1.RuntimeService/StartContainer sandboxID=aa7fe57a9bcd561bd4f8122f44f0ab7c7ea19946132ad3e4561aa057dbe68680
	Nov 24 13:57:54 old-k8s-version-551674 crio[780]: time="2025-11-24T13:57:54.434672568Z" level=info msg="Running pod sandbox: default/busybox/POD" id=990e659b-093e-4080-9328-66579b3b0e59 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 24 13:57:54 old-k8s-version-551674 crio[780]: time="2025-11-24T13:57:54.434756047Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 13:57:54 old-k8s-version-551674 crio[780]: time="2025-11-24T13:57:54.440056098Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:10fee0eaf2e151858cda9b326883523898c2764b74a72ac694abf15cab8fab1a UID:e3735245-8e28-4de0-a437-3a6f28002f38 NetNS:/var/run/netns/03d4289d-0e47-4818-bb25-9f826d556e22 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000c0c728}] Aliases:map[]}"
	Nov 24 13:57:54 old-k8s-version-551674 crio[780]: time="2025-11-24T13:57:54.440084101Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 24 13:57:54 old-k8s-version-551674 crio[780]: time="2025-11-24T13:57:54.450101425Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:10fee0eaf2e151858cda9b326883523898c2764b74a72ac694abf15cab8fab1a UID:e3735245-8e28-4de0-a437-3a6f28002f38 NetNS:/var/run/netns/03d4289d-0e47-4818-bb25-9f826d556e22 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000c0c728}] Aliases:map[]}"
	Nov 24 13:57:54 old-k8s-version-551674 crio[780]: time="2025-11-24T13:57:54.45022167Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 24 13:57:54 old-k8s-version-551674 crio[780]: time="2025-11-24T13:57:54.450841887Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 24 13:57:54 old-k8s-version-551674 crio[780]: time="2025-11-24T13:57:54.45161218Z" level=info msg="Ran pod sandbox 10fee0eaf2e151858cda9b326883523898c2764b74a72ac694abf15cab8fab1a with infra container: default/busybox/POD" id=990e659b-093e-4080-9328-66579b3b0e59 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 24 13:57:54 old-k8s-version-551674 crio[780]: time="2025-11-24T13:57:54.452626631Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=dbaa5299-c987-44c2-8372-27c2ecce7551 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 13:57:54 old-k8s-version-551674 crio[780]: time="2025-11-24T13:57:54.452723793Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=dbaa5299-c987-44c2-8372-27c2ecce7551 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 13:57:54 old-k8s-version-551674 crio[780]: time="2025-11-24T13:57:54.45275202Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=dbaa5299-c987-44c2-8372-27c2ecce7551 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 13:57:54 old-k8s-version-551674 crio[780]: time="2025-11-24T13:57:54.453237453Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=568607f3-459a-44d5-a19f-b80e9caabac1 name=/runtime.v1.ImageService/PullImage
	Nov 24 13:57:54 old-k8s-version-551674 crio[780]: time="2025-11-24T13:57:54.454504229Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 24 13:57:55 old-k8s-version-551674 crio[780]: time="2025-11-24T13:57:55.180548819Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=568607f3-459a-44d5-a19f-b80e9caabac1 name=/runtime.v1.ImageService/PullImage
	Nov 24 13:57:55 old-k8s-version-551674 crio[780]: time="2025-11-24T13:57:55.181345151Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=27a296eb-a267-4b33-ae95-3ebfa49c6399 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 13:57:55 old-k8s-version-551674 crio[780]: time="2025-11-24T13:57:55.182544671Z" level=info msg="Creating container: default/busybox/busybox" id=aa0cf6cb-1f8d-49c4-8503-1c017e717107 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 13:57:55 old-k8s-version-551674 crio[780]: time="2025-11-24T13:57:55.182674339Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 13:57:55 old-k8s-version-551674 crio[780]: time="2025-11-24T13:57:55.186181901Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 13:57:55 old-k8s-version-551674 crio[780]: time="2025-11-24T13:57:55.186556444Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 13:57:55 old-k8s-version-551674 crio[780]: time="2025-11-24T13:57:55.210035564Z" level=info msg="Created container 3f96337e7c4fdff418f9eab5dee1bafcfb2a81d397f68496526f87fdc6806171: default/busybox/busybox" id=aa0cf6cb-1f8d-49c4-8503-1c017e717107 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 13:57:55 old-k8s-version-551674 crio[780]: time="2025-11-24T13:57:55.210523636Z" level=info msg="Starting container: 3f96337e7c4fdff418f9eab5dee1bafcfb2a81d397f68496526f87fdc6806171" id=5fe8290f-5ac6-4c70-95c5-4520ecc013ea name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 13:57:55 old-k8s-version-551674 crio[780]: time="2025-11-24T13:57:55.212087636Z" level=info msg="Started container" PID=2187 containerID=3f96337e7c4fdff418f9eab5dee1bafcfb2a81d397f68496526f87fdc6806171 description=default/busybox/busybox id=5fe8290f-5ac6-4c70-95c5-4520ecc013ea name=/runtime.v1.RuntimeService/StartContainer sandboxID=10fee0eaf2e151858cda9b326883523898c2764b74a72ac694abf15cab8fab1a
	Nov 24 13:58:01 old-k8s-version-551674 crio[780]: time="2025-11-24T13:58:01.208619855Z" level=error msg="Unhandled Error: unable to upgrade websocket connection: websocket server finished before becoming ready (logger=\"UnhandledError\")"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	3f96337e7c4fd       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   7 seconds ago       Running             busybox                   0                   10fee0eaf2e15       busybox                                          default
	fe23a39bdb0d6       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      11 seconds ago      Running             coredns                   0                   fc02b7078892d       coredns-5dd5756b68-swk4w                         kube-system
	b4cee7c963e65       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      11 seconds ago      Running             storage-provisioner       0                   aa7fe57a9bcd5       storage-provisioner                              kube-system
	0293be263c045       docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11    22 seconds ago      Running             kindnet-cni               0                   5d1f674ea9a0a       kindnet-sz57p                                    kube-system
	8e7ca58087b20       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                      24 seconds ago      Running             kube-proxy                0                   f00d7bb020304       kube-proxy-trn2x                                 kube-system
	a2c47a5f9ac00       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      41 seconds ago      Running             etcd                      0                   a86255b54b017       etcd-old-k8s-version-551674                      kube-system
	1d8d139b0ef04       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                      41 seconds ago      Running             kube-controller-manager   0                   832151d2e9121       kube-controller-manager-old-k8s-version-551674   kube-system
	5e8e6352c3ed0       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                      41 seconds ago      Running             kube-apiserver            0                   f0c8b7f6bc1fa       kube-apiserver-old-k8s-version-551674            kube-system
	2298716f28005       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                      41 seconds ago      Running             kube-scheduler            0                   646dc19724ec4       kube-scheduler-old-k8s-version-551674            kube-system
	
	
	==> coredns [fe23a39bdb0d6c86fd526d9dde1128fcb638e0fc598b0da04dd62857d48eb15e] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 4c7f44b73086be760ec9e64204f63c5cc5a952c8c1c55ba0b41d8fc3315ce3c7d0259d04847cb8b4561043d4549603f3bccfd9b397eeb814eef159d244d26f39
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:59875 - 25532 "HINFO IN 4246325178884143598.5284614949363738733. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.083404614s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-551674
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-551674
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b5d1c9f4e75f4e638a533695fd62619949cefcab
	                    minikube.k8s.io/name=old-k8s-version-551674
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T13_57_26_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 13:57:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-551674
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 13:57:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 13:57:56 +0000   Mon, 24 Nov 2025 13:57:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 13:57:56 +0000   Mon, 24 Nov 2025 13:57:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 13:57:56 +0000   Mon, 24 Nov 2025 13:57:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 13:57:56 +0000   Mon, 24 Nov 2025 13:57:50 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    old-k8s-version-551674
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                3bfe263a-8777-48b0-84b7-18ab723a148d
	  Boot ID:                    9a34d64a-eb17-4892-9c0b-855837aec864
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 coredns-5dd5756b68-swk4w                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     24s
	  kube-system                 etcd-old-k8s-version-551674                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         38s
	  kube-system                 kindnet-sz57p                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      24s
	  kube-system                 kube-apiserver-old-k8s-version-551674             250m (3%)     0 (0%)      0 (0%)           0 (0%)         39s
	  kube-system                 kube-controller-manager-old-k8s-version-551674    200m (2%)     0 (0%)      0 (0%)           0 (0%)         37s
	  kube-system                 kube-proxy-trn2x                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	  kube-system                 kube-scheduler-old-k8s-version-551674             100m (1%)     0 (0%)      0 (0%)           0 (0%)         37s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 24s   kube-proxy       
	  Normal  Starting                 37s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  37s   kubelet          Node old-k8s-version-551674 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    37s   kubelet          Node old-k8s-version-551674 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     37s   kubelet          Node old-k8s-version-551674 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           25s   node-controller  Node old-k8s-version-551674 event: Registered Node old-k8s-version-551674 in Controller
	  Normal  NodeReady                12s   kubelet          Node old-k8s-version-551674 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a c8 62 0b 56 43 08 06
	[  +0.000389] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 7e 1c 4f d3 0f 6c 08 06
	[Nov24 13:16] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +1.054353] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +1.023912] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +1.023872] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +1.023897] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +1.023891] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +2.047768] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +4.031637] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +8.191144] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[ +16.382308] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[Nov24 13:17] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	
	
	==> etcd [a2c47a5f9ac0069dd39c788d0409524c9a115e1fe9697f6ee5be3dcc69ac54a9] <==
	{"level":"info","ts":"2025-11-24T13:57:21.366577Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 received MsgVoteResp from dfc97eb0aae75b33 at term 2"}
	{"level":"info","ts":"2025-11-24T13:57:21.366589Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became leader at term 2"}
	{"level":"info","ts":"2025-11-24T13:57:21.366601Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: dfc97eb0aae75b33 elected leader dfc97eb0aae75b33 at term 2"}
	{"level":"info","ts":"2025-11-24T13:57:21.367483Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-24T13:57:21.368167Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"dfc97eb0aae75b33","local-member-attributes":"{Name:old-k8s-version-551674 ClientURLs:[https://192.168.94.2:2379]}","request-path":"/0/members/dfc97eb0aae75b33/attributes","cluster-id":"da400bbece288f5a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-24T13:57:21.368203Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-24T13:57:21.368206Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-24T13:57:21.368299Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-24T13:57:21.368322Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-24T13:57:21.368592Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"da400bbece288f5a","local-member-id":"dfc97eb0aae75b33","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-24T13:57:21.368694Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-24T13:57:21.368731Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-24T13:57:21.369491Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.94.2:2379"}
	{"level":"info","ts":"2025-11-24T13:57:21.369524Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-24T13:57:26.311353Z","caller":"traceutil/trace.go:171","msg":"trace[198157923] transaction","detail":"{read_only:false; response_revision:254; number_of_response:1; }","duration":"174.349312ms","start":"2025-11-24T13:57:26.136978Z","end":"2025-11-24T13:57:26.311327Z","steps":["trace[198157923] 'process raft request'  (duration: 174.225696ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T13:57:26.311347Z","caller":"traceutil/trace.go:171","msg":"trace[1006424505] transaction","detail":"{read_only:false; response_revision:253; number_of_response:1; }","duration":"179.330564ms","start":"2025-11-24T13:57:26.131972Z","end":"2025-11-24T13:57:26.311303Z","steps":["trace[1006424505] 'process raft request'  (duration: 101.76865ms)","trace[1006424505] 'compare'  (duration: 77.331514ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-24T13:57:26.585731Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"160.747352ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6571766388403062628 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/pods/kube-system/kube-apiserver-old-k8s-version-551674\" mod_revision:191 > success:<request_put:<key:\"/registry/pods/kube-system/kube-apiserver-old-k8s-version-551674\" value_size:7371 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-apiserver-old-k8s-version-551674\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-11-24T13:57:26.585826Z","caller":"traceutil/trace.go:171","msg":"trace[1799248672] linearizableReadLoop","detail":"{readStateIndex:265; appliedIndex:264; }","duration":"188.744634ms","start":"2025-11-24T13:57:26.397069Z","end":"2025-11-24T13:57:26.585813Z","steps":["trace[1799248672] 'read index received'  (duration: 27.349091ms)","trace[1799248672] 'applied index is now lower than readState.Index'  (duration: 161.394344ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-24T13:57:26.585831Z","caller":"traceutil/trace.go:171","msg":"trace[890717278] transaction","detail":"{read_only:false; response_revision:255; number_of_response:1; }","duration":"252.947118ms","start":"2025-11-24T13:57:26.332868Z","end":"2025-11-24T13:57:26.585815Z","steps":["trace[890717278] 'process raft request'  (duration: 91.572237ms)","trace[890717278] 'compare'  (duration: 160.649926ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-24T13:57:26.585914Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"270.813393ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/endpointslicemirroring-controller\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-24T13:57:26.58594Z","caller":"traceutil/trace.go:171","msg":"trace[767124614] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/endpointslicemirroring-controller; range_end:; response_count:0; response_revision:255; }","duration":"270.862639ms","start":"2025-11-24T13:57:26.31507Z","end":"2025-11-24T13:57:26.585933Z","steps":["trace[767124614] 'agreement among raft nodes before linearized reading'  (duration: 270.793235ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T13:57:26.649095Z","caller":"traceutil/trace.go:171","msg":"trace[247275551] transaction","detail":"{read_only:false; response_revision:256; number_of_response:1; }","duration":"248.75816ms","start":"2025-11-24T13:57:26.400321Z","end":"2025-11-24T13:57:26.649079Z","steps":["trace[247275551] 'process raft request'  (duration: 248.655478ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-24T13:57:26.649177Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"251.157085ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-old-k8s-version-551674\" ","response":"range_response_count:1 size:4107"}
	{"level":"info","ts":"2025-11-24T13:57:26.64921Z","caller":"traceutil/trace.go:171","msg":"trace[214577649] range","detail":"{range_begin:/registry/pods/kube-system/etcd-old-k8s-version-551674; range_end:; response_count:1; response_revision:256; }","duration":"251.201119ms","start":"2025-11-24T13:57:26.398Z","end":"2025-11-24T13:57:26.649201Z","steps":["trace[214577649] 'agreement among raft nodes before linearized reading'  (duration: 251.12497ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T13:57:37.696682Z","caller":"traceutil/trace.go:171","msg":"trace[1178738] transaction","detail":"{read_only:false; response_revision:286; number_of_response:1; }","duration":"135.359612ms","start":"2025-11-24T13:57:37.561302Z","end":"2025-11-24T13:57:37.696662Z","steps":["trace[1178738] 'process raft request'  (duration: 58.999941ms)","trace[1178738] 'compare'  (duration: 76.223843ms)"],"step_count":2}
	
	
	==> kernel <==
	 13:58:02 up  2:40,  0 user,  load average: 3.32, 3.28, 2.02
	Linux old-k8s-version-551674 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [0293be263c0454b7da1538975b04a6b95a9c21f3af50face6d4943ff1955195c] <==
	I1124 13:57:40.439013       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 13:57:40.439284       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1124 13:57:40.439453       1 main.go:148] setting mtu 1500 for CNI 
	I1124 13:57:40.439474       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 13:57:40.439492       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T13:57:40Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 13:57:40.733243       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 13:57:40.733289       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 13:57:40.733308       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 13:57:40.832421       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1124 13:57:41.033960       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 13:57:41.033980       1 metrics.go:72] Registering metrics
	I1124 13:57:41.034023       1 controller.go:711] "Syncing nftables rules"
	I1124 13:57:50.740972       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1124 13:57:50.741038       1 main.go:301] handling current node
	I1124 13:58:00.736461       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1124 13:58:00.736502       1 main.go:301] handling current node
	
	
	==> kube-apiserver [5e8e6352c3ed07dfe8a40464c8090cb5184794810b3f3d8d3c931a769fbaa97f] <==
	I1124 13:57:22.532229       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1124 13:57:22.533703       1 controller.go:624] quota admission added evaluator for: namespaces
	I1124 13:57:22.534253       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1124 13:57:22.534281       1 aggregator.go:166] initial CRD sync complete...
	I1124 13:57:22.534293       1 autoregister_controller.go:141] Starting autoregister controller
	I1124 13:57:22.534301       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1124 13:57:22.534309       1 cache.go:39] Caches are synced for autoregister controller
	I1124 13:57:22.543172       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1124 13:57:22.543189       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1124 13:57:22.732448       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1124 13:57:23.436184       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1124 13:57:23.439718       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1124 13:57:23.439739       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1124 13:57:23.814920       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 13:57:23.844576       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 13:57:23.940313       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1124 13:57:23.945574       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I1124 13:57:23.946545       1 controller.go:624] quota admission added evaluator for: endpoints
	I1124 13:57:23.950242       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1124 13:57:24.492067       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1124 13:57:25.202262       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1124 13:57:25.210967       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1124 13:57:25.219209       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1124 13:57:38.006263       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I1124 13:57:38.247283       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [1d8d139b0ef0489a23c80411bac62f67f6597f4c91ccd432ffe1f58ef9d4fa5f] <==
	I1124 13:57:38.158683       1 shared_informer.go:318] Caches are synced for resource quota
	I1124 13:57:38.164297       1 shared_informer.go:318] Caches are synced for deployment
	I1124 13:57:38.164338       1 shared_informer.go:318] Caches are synced for disruption
	I1124 13:57:38.197054       1 shared_informer.go:318] Caches are synced for resource quota
	I1124 13:57:38.239593       1 shared_informer.go:318] Caches are synced for ReplicaSet
	I1124 13:57:38.250452       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1124 13:57:38.353539       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-4htgj"
	I1124 13:57:38.361090       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-swk4w"
	I1124 13:57:38.371048       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="120.735099ms"
	I1124 13:57:38.394029       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="22.923455ms"
	I1124 13:57:38.394125       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="49.718µs"
	I1124 13:57:38.590681       1 shared_informer.go:318] Caches are synced for garbage collector
	I1124 13:57:38.638012       1 shared_informer.go:318] Caches are synced for garbage collector
	I1124 13:57:38.638057       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1124 13:57:38.762102       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1124 13:57:38.777180       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-4htgj"
	I1124 13:57:38.783303       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="21.539139ms"
	I1124 13:57:38.790527       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="7.16857ms"
	I1124 13:57:38.790621       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="46.619µs"
	I1124 13:57:50.920713       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="89.998µs"
	I1124 13:57:50.930253       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="82.069µs"
	I1124 13:57:51.357247       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="101.541µs"
	I1124 13:57:52.367416       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="6.465706ms"
	I1124 13:57:52.367517       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="62.26µs"
	I1124 13:57:52.990370       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	
	
	==> kube-proxy [8e7ca58087b20b43c6091f8148a29a53ffc4900c97751ae0ecea60d29ef28fe0] <==
	I1124 13:57:38.480337       1 server_others.go:69] "Using iptables proxy"
	I1124 13:57:38.491789       1 node.go:141] Successfully retrieved node IP: 192.168.94.2
	I1124 13:57:38.523162       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 13:57:38.527089       1 server_others.go:152] "Using iptables Proxier"
	I1124 13:57:38.527132       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1124 13:57:38.527142       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1124 13:57:38.527180       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1124 13:57:38.527405       1 server.go:846] "Version info" version="v1.28.0"
	I1124 13:57:38.527424       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 13:57:38.531380       1 config.go:188] "Starting service config controller"
	I1124 13:57:38.531433       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1124 13:57:38.531722       1 config.go:315] "Starting node config controller"
	I1124 13:57:38.531738       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1124 13:57:38.531903       1 config.go:97] "Starting endpoint slice config controller"
	I1124 13:57:38.531951       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1124 13:57:38.631880       1 shared_informer.go:318] Caches are synced for node config
	I1124 13:57:38.632003       1 shared_informer.go:318] Caches are synced for service config
	I1124 13:57:38.632047       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [2298716f28005dd19a00a3e83e1db16489600c6eaf2df2f440ea83cb2319c57c] <==
	W1124 13:57:22.486863       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1124 13:57:22.487415       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1124 13:57:22.488163       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1124 13:57:22.488207       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1124 13:57:22.488732       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1124 13:57:22.488758       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1124 13:57:22.488785       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1124 13:57:22.488811       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1124 13:57:22.489239       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1124 13:57:22.489273       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1124 13:57:22.489401       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1124 13:57:22.489470       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1124 13:57:23.403154       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1124 13:57:23.403187       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W1124 13:57:23.420473       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1124 13:57:23.420510       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1124 13:57:23.425505       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1124 13:57:23.425529       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1124 13:57:23.535922       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1124 13:57:23.535963       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1124 13:57:23.550695       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1124 13:57:23.550729       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W1124 13:57:23.647087       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1124 13:57:23.647137       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I1124 13:57:23.871085       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 24 13:57:38 old-k8s-version-551674 kubelet[1364]: I1124 13:57:38.027616    1364 topology_manager.go:215] "Topology Admit Handler" podUID="a75b53b9-cf49-47a0-8184-f678d2dd7fbb" podNamespace="kube-system" podName="kindnet-sz57p"
	Nov 24 13:57:38 old-k8s-version-551674 kubelet[1364]: I1124 13:57:38.052174    1364 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0e1df93d-97cc-48c1-9a95-18cd7d3f1a38-xtables-lock\") pod \"kube-proxy-trn2x\" (UID: \"0e1df93d-97cc-48c1-9a95-18cd7d3f1a38\") " pod="kube-system/kube-proxy-trn2x"
	Nov 24 13:57:38 old-k8s-version-551674 kubelet[1364]: I1124 13:57:38.052231    1364 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwt68\" (UniqueName: \"kubernetes.io/projected/a75b53b9-cf49-47a0-8184-f678d2dd7fbb-kube-api-access-kwt68\") pod \"kindnet-sz57p\" (UID: \"a75b53b9-cf49-47a0-8184-f678d2dd7fbb\") " pod="kube-system/kindnet-sz57p"
	Nov 24 13:57:38 old-k8s-version-551674 kubelet[1364]: I1124 13:57:38.052267    1364 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwklx\" (UniqueName: \"kubernetes.io/projected/0e1df93d-97cc-48c1-9a95-18cd7d3f1a38-kube-api-access-xwklx\") pod \"kube-proxy-trn2x\" (UID: \"0e1df93d-97cc-48c1-9a95-18cd7d3f1a38\") " pod="kube-system/kube-proxy-trn2x"
	Nov 24 13:57:38 old-k8s-version-551674 kubelet[1364]: I1124 13:57:38.052294    1364 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a75b53b9-cf49-47a0-8184-f678d2dd7fbb-xtables-lock\") pod \"kindnet-sz57p\" (UID: \"a75b53b9-cf49-47a0-8184-f678d2dd7fbb\") " pod="kube-system/kindnet-sz57p"
	Nov 24 13:57:38 old-k8s-version-551674 kubelet[1364]: I1124 13:57:38.052329    1364 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0e1df93d-97cc-48c1-9a95-18cd7d3f1a38-lib-modules\") pod \"kube-proxy-trn2x\" (UID: \"0e1df93d-97cc-48c1-9a95-18cd7d3f1a38\") " pod="kube-system/kube-proxy-trn2x"
	Nov 24 13:57:38 old-k8s-version-551674 kubelet[1364]: I1124 13:57:38.052360    1364 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0e1df93d-97cc-48c1-9a95-18cd7d3f1a38-kube-proxy\") pod \"kube-proxy-trn2x\" (UID: \"0e1df93d-97cc-48c1-9a95-18cd7d3f1a38\") " pod="kube-system/kube-proxy-trn2x"
	Nov 24 13:57:38 old-k8s-version-551674 kubelet[1364]: I1124 13:57:38.052431    1364 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/a75b53b9-cf49-47a0-8184-f678d2dd7fbb-cni-cfg\") pod \"kindnet-sz57p\" (UID: \"a75b53b9-cf49-47a0-8184-f678d2dd7fbb\") " pod="kube-system/kindnet-sz57p"
	Nov 24 13:57:38 old-k8s-version-551674 kubelet[1364]: I1124 13:57:38.052459    1364 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a75b53b9-cf49-47a0-8184-f678d2dd7fbb-lib-modules\") pod \"kindnet-sz57p\" (UID: \"a75b53b9-cf49-47a0-8184-f678d2dd7fbb\") " pod="kube-system/kindnet-sz57p"
	Nov 24 13:57:38 old-k8s-version-551674 kubelet[1364]: I1124 13:57:38.122681    1364 kuberuntime_manager.go:1463] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 24 13:57:38 old-k8s-version-551674 kubelet[1364]: I1124 13:57:38.123424    1364 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 24 13:57:40 old-k8s-version-551674 kubelet[1364]: I1124 13:57:40.338061    1364 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-trn2x" podStartSLOduration=2.338000217 podCreationTimestamp="2025-11-24 13:57:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 13:57:39.336388625 +0000 UTC m=+14.161611039" watchObservedRunningTime="2025-11-24 13:57:40.338000217 +0000 UTC m=+15.163222614"
	Nov 24 13:57:50 old-k8s-version-551674 kubelet[1364]: I1124 13:57:50.896937    1364 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Nov 24 13:57:50 old-k8s-version-551674 kubelet[1364]: I1124 13:57:50.920734    1364 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-sz57p" podStartSLOduration=11.045354425 podCreationTimestamp="2025-11-24 13:57:38 +0000 UTC" firstStartedPulling="2025-11-24 13:57:38.34335468 +0000 UTC m=+13.168577067" lastFinishedPulling="2025-11-24 13:57:40.218681387 +0000 UTC m=+15.043903777" observedRunningTime="2025-11-24 13:57:40.338171802 +0000 UTC m=+15.163394191" watchObservedRunningTime="2025-11-24 13:57:50.920681135 +0000 UTC m=+25.745903533"
	Nov 24 13:57:50 old-k8s-version-551674 kubelet[1364]: I1124 13:57:50.921116    1364 topology_manager.go:215] "Topology Admit Handler" podUID="ea9c4e37-9d2c-4148-b9cf-1961e1e7923f" podNamespace="kube-system" podName="coredns-5dd5756b68-swk4w"
	Nov 24 13:57:50 old-k8s-version-551674 kubelet[1364]: I1124 13:57:50.922030    1364 topology_manager.go:215] "Topology Admit Handler" podUID="d77a52ec-4e20-4ade-a015-7e4a4ea5baae" podNamespace="kube-system" podName="storage-provisioner"
	Nov 24 13:57:50 old-k8s-version-551674 kubelet[1364]: I1124 13:57:50.947318    1364 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/d77a52ec-4e20-4ade-a015-7e4a4ea5baae-tmp\") pod \"storage-provisioner\" (UID: \"d77a52ec-4e20-4ade-a015-7e4a4ea5baae\") " pod="kube-system/storage-provisioner"
	Nov 24 13:57:50 old-k8s-version-551674 kubelet[1364]: I1124 13:57:50.947372    1364 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhst9\" (UniqueName: \"kubernetes.io/projected/ea9c4e37-9d2c-4148-b9cf-1961e1e7923f-kube-api-access-bhst9\") pod \"coredns-5dd5756b68-swk4w\" (UID: \"ea9c4e37-9d2c-4148-b9cf-1961e1e7923f\") " pod="kube-system/coredns-5dd5756b68-swk4w"
	Nov 24 13:57:50 old-k8s-version-551674 kubelet[1364]: I1124 13:57:50.947417    1364 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96v5h\" (UniqueName: \"kubernetes.io/projected/d77a52ec-4e20-4ade-a015-7e4a4ea5baae-kube-api-access-96v5h\") pod \"storage-provisioner\" (UID: \"d77a52ec-4e20-4ade-a015-7e4a4ea5baae\") " pod="kube-system/storage-provisioner"
	Nov 24 13:57:50 old-k8s-version-551674 kubelet[1364]: I1124 13:57:50.947519    1364 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ea9c4e37-9d2c-4148-b9cf-1961e1e7923f-config-volume\") pod \"coredns-5dd5756b68-swk4w\" (UID: \"ea9c4e37-9d2c-4148-b9cf-1961e1e7923f\") " pod="kube-system/coredns-5dd5756b68-swk4w"
	Nov 24 13:57:51 old-k8s-version-551674 kubelet[1364]: I1124 13:57:51.368405    1364 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-swk4w" podStartSLOduration=13.368353579 podCreationTimestamp="2025-11-24 13:57:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 13:57:51.35706835 +0000 UTC m=+26.182290746" watchObservedRunningTime="2025-11-24 13:57:51.368353579 +0000 UTC m=+26.193576040"
	Nov 24 13:57:51 old-k8s-version-551674 kubelet[1364]: I1124 13:57:51.368501    1364 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.368483236 podCreationTimestamp="2025-11-24 13:57:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 13:57:51.36816456 +0000 UTC m=+26.193386980" watchObservedRunningTime="2025-11-24 13:57:51.368483236 +0000 UTC m=+26.193705632"
	Nov 24 13:57:54 old-k8s-version-551674 kubelet[1364]: I1124 13:57:54.132492    1364 topology_manager.go:215] "Topology Admit Handler" podUID="e3735245-8e28-4de0-a437-3a6f28002f38" podNamespace="default" podName="busybox"
	Nov 24 13:57:54 old-k8s-version-551674 kubelet[1364]: I1124 13:57:54.167335    1364 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84xgf\" (UniqueName: \"kubernetes.io/projected/e3735245-8e28-4de0-a437-3a6f28002f38-kube-api-access-84xgf\") pod \"busybox\" (UID: \"e3735245-8e28-4de0-a437-3a6f28002f38\") " pod="default/busybox"
	Nov 24 13:57:55 old-k8s-version-551674 kubelet[1364]: I1124 13:57:55.364561    1364 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=0.636639794 podCreationTimestamp="2025-11-24 13:57:54 +0000 UTC" firstStartedPulling="2025-11-24 13:57:54.452931165 +0000 UTC m=+29.278153554" lastFinishedPulling="2025-11-24 13:57:55.180805294 +0000 UTC m=+30.006027681" observedRunningTime="2025-11-24 13:57:55.36435558 +0000 UTC m=+30.189577997" watchObservedRunningTime="2025-11-24 13:57:55.364513921 +0000 UTC m=+30.189736318"
	
	
	==> storage-provisioner [b4cee7c963e6570c33fa55822dd9b2160ae7fe467e4ba2a74329d96fbc4f25ce] <==
	I1124 13:57:51.292942       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1124 13:57:51.302959       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1124 13:57:51.303055       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1124 13:57:51.309299       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1124 13:57:51.309407       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"7422d999-6149-47d4-9886-755e6760dd69", APIVersion:"v1", ResourceVersion:"395", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-551674_c35a7b63-a093-4bb3-816b-ad1db2305cf0 became leader
	I1124 13:57:51.309453       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-551674_c35a7b63-a093-4bb3-816b-ad1db2305cf0!
	I1124 13:57:51.410084       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-551674_c35a7b63-a093-4bb3-816b-ad1db2305cf0!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-551674 -n old-k8s-version-551674
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-551674 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.21s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.18s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-495729 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-495729 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (241.721754ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T13:58:07Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p no-preload-495729 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
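Per the stderr above, the MK_ADDON_ENABLE_PAUSED exit comes from minikube's paused-container check, which runs "sudo runc list -f json" on the node and fails because /run/runc is missing there. A manual re-check of that state, assuming the no-preload-495729 node is still up, could look like the following illustrative commands; they mirror the ssh invocations recorded later in the Audit table and are not part of this test run:

	out/minikube-linux-amd64 ssh -p no-preload-495729 sudo ls /run/runc          # confirm whether the runc state directory exists on the node
	out/minikube-linux-amd64 ssh -p no-preload-495729 sudo runc list -f json     # the same check the addon command reports as failing
	out/minikube-linux-amd64 ssh -p no-preload-495729 sudo crictl ps             # list running containers via the CRI for comparison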
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-495729 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-495729 describe deploy/metrics-server -n kube-system: exit status 1 (56.066902ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-495729 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
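The assertion above checks that the metrics-server deployment description contains the overridden image reference (fake.domain/registry.k8s.io/echoserver:1.4); in this run the Deployment was never created (see the NotFound error above), so there is nothing to match. For a run where the addon does deploy, one illustrative way to spot-check the image in use (a hypothetical manual query, not part of the test) would be:

	kubectl --context no-preload-495729 -n kube-system get deployment metrics-server -o jsonpath='{.spec.template.spec.containers[*].image}'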
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-495729
helpers_test.go:243: (dbg) docker inspect no-preload-495729:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "93c1bfb2fd2b621a0bec0b1d527f22cea7f75c06c122e690d536a96be4f3a791",
	        "Created": "2025-11-24T13:57:11.035074993Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 574069,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T13:57:11.065463773Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/93c1bfb2fd2b621a0bec0b1d527f22cea7f75c06c122e690d536a96be4f3a791/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/93c1bfb2fd2b621a0bec0b1d527f22cea7f75c06c122e690d536a96be4f3a791/hostname",
	        "HostsPath": "/var/lib/docker/containers/93c1bfb2fd2b621a0bec0b1d527f22cea7f75c06c122e690d536a96be4f3a791/hosts",
	        "LogPath": "/var/lib/docker/containers/93c1bfb2fd2b621a0bec0b1d527f22cea7f75c06c122e690d536a96be4f3a791/93c1bfb2fd2b621a0bec0b1d527f22cea7f75c06c122e690d536a96be4f3a791-json.log",
	        "Name": "/no-preload-495729",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-495729:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-495729",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "93c1bfb2fd2b621a0bec0b1d527f22cea7f75c06c122e690d536a96be4f3a791",
	                "LowerDir": "/var/lib/docker/overlay2/cf0d29957ea77cc1b3192bc6ff101210d9f3df00649b7e5c1defd8454175840b-init/diff:/var/lib/docker/overlay2/b17d6205cf290186b389ac7c1255d7274fea54ef27df9ff8755bddd2d25eb638/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cf0d29957ea77cc1b3192bc6ff101210d9f3df00649b7e5c1defd8454175840b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cf0d29957ea77cc1b3192bc6ff101210d9f3df00649b7e5c1defd8454175840b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cf0d29957ea77cc1b3192bc6ff101210d9f3df00649b7e5c1defd8454175840b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-495729",
	                "Source": "/var/lib/docker/volumes/no-preload-495729/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-495729",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-495729",
	                "name.minikube.sigs.k8s.io": "no-preload-495729",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "93880d98b3e78c69e820a8798f78c05fa2511f25d45b4d5e791c6ddd64c6b7c7",
	            "SandboxKey": "/var/run/docker/netns/93880d98b3e7",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33433"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33434"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33437"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33435"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33436"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-495729": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "160c86453933d759975010a4980c48a41dc82dff079fabd600f1a15b1aa5b6c8",
	                    "EndpointID": "f0452a89b591868d93cb84505ae01b1316180dfb50de22ee4661d43b852bbbd1",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "62:0b:d1:05:c7:26",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-495729",
	                        "93c1bfb2fd2b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-495729 -n no-preload-495729
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-495729 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-495729 logs -n 25: (1.021013578s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                          ARGS                                                                           │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p cilium-165759 sudo cat /etc/kubernetes/kubelet.conf                                                                                                  │ cilium-165759          │ jenkins │ v1.37.0 │ 24 Nov 25 13:57 UTC │                     │
	│ ssh     │ -p cilium-165759 sudo cat /var/lib/kubelet/config.yaml                                                                                                  │ cilium-165759          │ jenkins │ v1.37.0 │ 24 Nov 25 13:57 UTC │                     │
	│ ssh     │ -p cilium-165759 sudo systemctl status docker --all --full --no-pager                                                                                   │ cilium-165759          │ jenkins │ v1.37.0 │ 24 Nov 25 13:57 UTC │                     │
	│ ssh     │ -p cilium-165759 sudo systemctl cat docker --no-pager                                                                                                   │ cilium-165759          │ jenkins │ v1.37.0 │ 24 Nov 25 13:57 UTC │                     │
	│ ssh     │ -p cilium-165759 sudo cat /etc/docker/daemon.json                                                                                                       │ cilium-165759          │ jenkins │ v1.37.0 │ 24 Nov 25 13:57 UTC │                     │
	│ ssh     │ -p cilium-165759 sudo docker system info                                                                                                                │ cilium-165759          │ jenkins │ v1.37.0 │ 24 Nov 25 13:57 UTC │                     │
	│ ssh     │ -p cilium-165759 sudo systemctl status cri-docker --all --full --no-pager                                                                               │ cilium-165759          │ jenkins │ v1.37.0 │ 24 Nov 25 13:57 UTC │                     │
	│ ssh     │ -p cilium-165759 sudo systemctl cat cri-docker --no-pager                                                                                               │ cilium-165759          │ jenkins │ v1.37.0 │ 24 Nov 25 13:57 UTC │                     │
	│ ssh     │ -p cilium-165759 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                          │ cilium-165759          │ jenkins │ v1.37.0 │ 24 Nov 25 13:57 UTC │                     │
	│ ssh     │ -p cilium-165759 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                    │ cilium-165759          │ jenkins │ v1.37.0 │ 24 Nov 25 13:57 UTC │                     │
	│ ssh     │ -p cilium-165759 sudo cri-dockerd --version                                                                                                             │ cilium-165759          │ jenkins │ v1.37.0 │ 24 Nov 25 13:57 UTC │                     │
	│ ssh     │ -p cilium-165759 sudo systemctl status containerd --all --full --no-pager                                                                               │ cilium-165759          │ jenkins │ v1.37.0 │ 24 Nov 25 13:57 UTC │                     │
	│ ssh     │ -p cilium-165759 sudo systemctl cat containerd --no-pager                                                                                               │ cilium-165759          │ jenkins │ v1.37.0 │ 24 Nov 25 13:57 UTC │                     │
	│ ssh     │ -p cilium-165759 sudo cat /lib/systemd/system/containerd.service                                                                                        │ cilium-165759          │ jenkins │ v1.37.0 │ 24 Nov 25 13:57 UTC │                     │
	│ ssh     │ -p cilium-165759 sudo cat /etc/containerd/config.toml                                                                                                   │ cilium-165759          │ jenkins │ v1.37.0 │ 24 Nov 25 13:57 UTC │                     │
	│ ssh     │ -p cilium-165759 sudo containerd config dump                                                                                                            │ cilium-165759          │ jenkins │ v1.37.0 │ 24 Nov 25 13:57 UTC │                     │
	│ ssh     │ -p cilium-165759 sudo systemctl status crio --all --full --no-pager                                                                                     │ cilium-165759          │ jenkins │ v1.37.0 │ 24 Nov 25 13:57 UTC │                     │
	│ ssh     │ -p cilium-165759 sudo systemctl cat crio --no-pager                                                                                                     │ cilium-165759          │ jenkins │ v1.37.0 │ 24 Nov 25 13:57 UTC │                     │
	│ ssh     │ -p cilium-165759 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                           │ cilium-165759          │ jenkins │ v1.37.0 │ 24 Nov 25 13:57 UTC │                     │
	│ ssh     │ -p cilium-165759 sudo crio config                                                                                                                       │ cilium-165759          │ jenkins │ v1.37.0 │ 24 Nov 25 13:57 UTC │                     │
	│ delete  │ -p cilium-165759                                                                                                                                        │ cilium-165759          │ jenkins │ v1.37.0 │ 24 Nov 25 13:57 UTC │ 24 Nov 25 13:57 UTC │
	│ start   │ -p no-preload-495729 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ no-preload-495729      │ jenkins │ v1.37.0 │ 24 Nov 25 13:57 UTC │ 24 Nov 25 13:58 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-551674 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain            │ old-k8s-version-551674 │ jenkins │ v1.37.0 │ 24 Nov 25 13:58 UTC │                     │
	│ stop    │ -p old-k8s-version-551674 --alsologtostderr -v=3                                                                                                        │ old-k8s-version-551674 │ jenkins │ v1.37.0 │ 24 Nov 25 13:58 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-495729 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                 │ no-preload-495729      │ jenkins │ v1.37.0 │ 24 Nov 25 13:58 UTC │                     │
	└─────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 13:57:10
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 13:57:10.218542  573633 out.go:360] Setting OutFile to fd 1 ...
	I1124 13:57:10.218815  573633 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:57:10.218825  573633 out.go:374] Setting ErrFile to fd 2...
	I1124 13:57:10.218830  573633 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:57:10.219076  573633 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-348000/.minikube/bin
	I1124 13:57:10.219557  573633 out.go:368] Setting JSON to false
	I1124 13:57:10.220662  573633 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":9577,"bootTime":1763983053,"procs":317,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 13:57:10.220729  573633 start.go:143] virtualization: kvm guest
	I1124 13:57:10.222947  573633 out.go:179] * [no-preload-495729] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1124 13:57:10.224082  573633 notify.go:221] Checking for updates...
	I1124 13:57:10.224113  573633 out.go:179]   - MINIKUBE_LOCATION=21932
	I1124 13:57:10.225317  573633 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 13:57:10.226357  573633 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21932-348000/kubeconfig
	I1124 13:57:10.227615  573633 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21932-348000/.minikube
	I1124 13:57:10.228566  573633 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1124 13:57:10.229524  573633 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 13:57:10.230910  573633 config.go:182] Loaded profile config "cert-expiration-107341": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 13:57:10.231019  573633 config.go:182] Loaded profile config "kubernetes-upgrade-061040": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 13:57:10.231099  573633 config.go:182] Loaded profile config "old-k8s-version-551674": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1124 13:57:10.231200  573633 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 13:57:10.253437  573633 docker.go:124] docker version: linux-29.0.3:Docker Engine - Community
	I1124 13:57:10.253503  573633 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 13:57:10.307469  573633 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-24 13:57:10.298427156 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 13:57:10.307577  573633 docker.go:319] overlay module found
	I1124 13:57:10.309035  573633 out.go:179] * Using the docker driver based on user configuration
	I1124 13:57:10.310000  573633 start.go:309] selected driver: docker
	I1124 13:57:10.310014  573633 start.go:927] validating driver "docker" against <nil>
	I1124 13:57:10.310024  573633 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 13:57:10.310561  573633 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 13:57:10.368837  573633 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-24 13:57:10.359083058 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 13:57:10.369009  573633 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1124 13:57:10.369221  573633 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 13:57:10.370583  573633 out.go:179] * Using Docker driver with root privileges
	I1124 13:57:10.371577  573633 cni.go:84] Creating CNI manager for ""
	I1124 13:57:10.371643  573633 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 13:57:10.371653  573633 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1124 13:57:10.371722  573633 start.go:353] cluster config:
	{Name:no-preload-495729 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-495729 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID
:0 GPUs: AutoPauseInterval:1m0s}
	I1124 13:57:10.372825  573633 out.go:179] * Starting "no-preload-495729" primary control-plane node in "no-preload-495729" cluster
	I1124 13:57:10.373834  573633 cache.go:134] Beginning downloading kic base image for docker with crio
	I1124 13:57:10.374930  573633 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1124 13:57:10.375871  573633 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 13:57:10.375926  573633 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1124 13:57:10.375971  573633 profile.go:143] Saving config to /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/no-preload-495729/config.json ...
	I1124 13:57:10.376012  573633 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/no-preload-495729/config.json: {Name:mke1a0c7d43d3d88b3c393226f430e80d17dba2c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
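Editor's note (not part of the test log): the two lines above dump the generated cluster config and persist it to the profile's config.json. A minimal sketch for reading that file back, assuming jq is installed and using the field names visible in the config dump above (the CI run keeps its .minikube under /home/jenkins/minikube-integration/21932-348000 rather than $HOME):

    # Show a few key fields of the saved profile config (sketch, not run by the test).
    PROFILE_CONFIG="$HOME/.minikube/profiles/no-preload-495729/config.json"
    jq '{name: .Name, driver: .Driver,
         runtime: .KubernetesConfig.ContainerRuntime,
         k8s: .KubernetesConfig.KubernetesVersion}' "$PROFILE_CONFIG"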
	I1124 13:57:10.376198  573633 cache.go:107] acquiring lock: {Name:mk764472169a1e016ae63c0caff778e680c6cc24 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 13:57:10.376234  573633 cache.go:107] acquiring lock: {Name:mk669cb175129cf687c7e25066832b47953691e3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 13:57:10.376240  573633 cache.go:107] acquiring lock: {Name:mka0650b538fb4091b2e54c68f59570306a77fce Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 13:57:10.376212  573633 cache.go:107] acquiring lock: {Name:mk5f01751f9e61bc354dc5d1166bb5f82b537ba6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 13:57:10.376351  573633 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1124 13:57:10.376347  573633 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1124 13:57:10.376301  573633 cache.go:107] acquiring lock: {Name:mkcfb1dbf2a96e162ab77a7a3e525cb4ab2b83eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 13:57:10.376292  573633 cache.go:107] acquiring lock: {Name:mk0942bbb6bc7b396b0ef16d0367e14ae5995fec Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 13:57:10.376213  573633 cache.go:107] acquiring lock: {Name:mk758ac789f0a6c975e003d2ce1360b045d19bd9 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 13:57:10.376418  573633 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1124 13:57:10.376194  573633 cache.go:107] acquiring lock: {Name:mka7c11330b71ddccabe0a28536b2929e10c275d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 13:57:10.376577  573633 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1124 13:57:10.376589  573633 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1124 13:57:10.376631  573633 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1124 13:57:10.376659  573633 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1124 13:57:10.376668  573633 cache.go:115] /home/jenkins/minikube-integration/21932-348000/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1124 13:57:10.376680  573633 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21932-348000/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 510.198µs
	I1124 13:57:10.376697  573633 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21932-348000/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1124 13:57:10.377470  573633 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1124 13:57:10.377537  573633 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1124 13:57:10.377547  573633 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1124 13:57:10.377685  573633 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1124 13:57:10.377723  573633 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1124 13:57:10.377732  573633 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1124 13:57:10.377704  573633 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1124 13:57:10.396669  573633 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1124 13:57:10.396687  573633 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1124 13:57:10.396707  573633 cache.go:240] Successfully downloaded all kic artifacts
	I1124 13:57:10.396756  573633 start.go:360] acquireMachinesLock for no-preload-495729: {Name:mk2b7a8448b6c656ea268c32a99c11369d347825 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 13:57:10.396846  573633 start.go:364] duration metric: took 70.67µs to acquireMachinesLock for "no-preload-495729"
	I1124 13:57:10.396873  573633 start.go:93] Provisioning new machine with config: &{Name:no-preload-495729 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-495729 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwa
rePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 13:57:10.396969  573633 start.go:125] createHost starting for "" (driver="docker")
	I1124 13:57:09.081017  571407 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21932-348000/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v old-k8s-version-551674:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (5.112727594s)
	I1124 13:57:09.081055  571407 kic.go:203] duration metric: took 5.112892012s to extract preloaded images to volume ...
	W1124 13:57:09.081163  571407 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1124 13:57:09.081208  571407 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1124 13:57:09.081265  571407 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1124 13:57:09.142992  571407 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname old-k8s-version-551674 --name old-k8s-version-551674 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=old-k8s-version-551674 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=old-k8s-version-551674 --network old-k8s-version-551674 --ip 192.168.94.2 --volume old-k8s-version-551674:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1124 13:57:09.454604  571407 cli_runner.go:164] Run: docker container inspect old-k8s-version-551674 --format={{.State.Running}}
	I1124 13:57:09.472872  571407 cli_runner.go:164] Run: docker container inspect old-k8s-version-551674 --format={{.State.Status}}
	I1124 13:57:09.491970  571407 cli_runner.go:164] Run: docker exec old-k8s-version-551674 stat /var/lib/dpkg/alternatives/iptables
	I1124 13:57:09.541167  571407 oci.go:144] the created container "old-k8s-version-551674" has a running status.
	I1124 13:57:09.541193  571407 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21932-348000/.minikube/machines/old-k8s-version-551674/id_rsa...
	I1124 13:57:09.590008  571407 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21932-348000/.minikube/machines/old-k8s-version-551674/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1124 13:57:09.619369  571407 cli_runner.go:164] Run: docker container inspect old-k8s-version-551674 --format={{.State.Status}}
	I1124 13:57:09.638216  571407 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1124 13:57:09.638235  571407 kic_runner.go:114] Args: [docker exec --privileged old-k8s-version-551674 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1124 13:57:09.680942  571407 cli_runner.go:164] Run: docker container inspect old-k8s-version-551674 --format={{.State.Status}}
	I1124 13:57:09.702755  571407 machine.go:94] provisionDockerMachine start ...
	I1124 13:57:09.702846  571407 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-551674
	I1124 13:57:09.725575  571407 main.go:143] libmachine: Using SSH client type: native
	I1124 13:57:09.726004  571407 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33428 <nil> <nil>}
	I1124 13:57:09.726027  571407 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 13:57:09.726816  571407 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:49568->127.0.0.1:33428: read: connection reset by peer
	I1124 13:57:12.869504  571407 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-551674
	
	I1124 13:57:12.869547  571407 ubuntu.go:182] provisioning hostname "old-k8s-version-551674"
	I1124 13:57:12.869619  571407 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-551674
	I1124 13:57:12.887539  571407 main.go:143] libmachine: Using SSH client type: native
	I1124 13:57:12.887814  571407 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33428 <nil> <nil>}
	I1124 13:57:12.887829  571407 main.go:143] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-551674 && echo "old-k8s-version-551674" | sudo tee /etc/hostname
	I1124 13:57:13.040135  571407 main.go:143] libmachine: SSH cmd err, output: <nil>: old-k8s-version-551674
	
	I1124 13:57:13.040227  571407 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-551674
	I1124 13:57:13.059060  571407 main.go:143] libmachine: Using SSH client type: native
	I1124 13:57:13.059344  571407 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33428 <nil> <nil>}
	I1124 13:57:13.059382  571407 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-551674' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-551674/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-551674' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 13:57:10.399195  573633 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1124 13:57:10.399402  573633 start.go:159] libmachine.API.Create for "no-preload-495729" (driver="docker")
	I1124 13:57:10.399436  573633 client.go:173] LocalClient.Create starting
	I1124 13:57:10.399495  573633 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21932-348000/.minikube/certs/ca.pem
	I1124 13:57:10.399537  573633 main.go:143] libmachine: Decoding PEM data...
	I1124 13:57:10.399566  573633 main.go:143] libmachine: Parsing certificate...
	I1124 13:57:10.399624  573633 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21932-348000/.minikube/certs/cert.pem
	I1124 13:57:10.399652  573633 main.go:143] libmachine: Decoding PEM data...
	I1124 13:57:10.399671  573633 main.go:143] libmachine: Parsing certificate...
	I1124 13:57:10.400028  573633 cli_runner.go:164] Run: docker network inspect no-preload-495729 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1124 13:57:10.416945  573633 cli_runner.go:211] docker network inspect no-preload-495729 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1124 13:57:10.417009  573633 network_create.go:284] running [docker network inspect no-preload-495729] to gather additional debugging logs...
	I1124 13:57:10.417030  573633 cli_runner.go:164] Run: docker network inspect no-preload-495729
	W1124 13:57:10.431256  573633 cli_runner.go:211] docker network inspect no-preload-495729 returned with exit code 1
	I1124 13:57:10.431278  573633 network_create.go:287] error running [docker network inspect no-preload-495729]: docker network inspect no-preload-495729: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-495729 not found
	I1124 13:57:10.431291  573633 network_create.go:289] output of [docker network inspect no-preload-495729]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-495729 not found
	
	** /stderr **
	I1124 13:57:10.431357  573633 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 13:57:10.447792  573633 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-d51e7dfe1049 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:4a:86:1b:17:16:ff} reservation:<nil>}
	I1124 13:57:10.448788  573633 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-e3a6280986d1 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:92:e6:88:24:ba:69} reservation:<nil>}
	I1124 13:57:10.449582  573633 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-e4f79d672777 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:32:e2:7c:23:0e:27} reservation:<nil>}
	I1124 13:57:10.450860  573633 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-283ea71f66a5 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:b6:70:12:a2:88:dd} reservation:<nil>}
	I1124 13:57:10.451371  573633 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-6303f2fb88a2 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:76:39:35:0d:14:96} reservation:<nil>}
	I1124 13:57:10.451870  573633 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-584350b1ae00 IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:72:e5:2a:e9:2d:0e} reservation:<nil>}
	I1124 13:57:10.452479  573633 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001c81450}
	I1124 13:57:10.452501  573633 network_create.go:124] attempt to create docker network no-preload-495729 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1124 13:57:10.452539  573633 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-495729 no-preload-495729
	I1124 13:57:10.500646  573633 network_create.go:108] docker network no-preload-495729 192.168.103.0/24 created
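Editor's note (not part of the test log): the lines above show minikube skipping every bridge subnet already in use (192.168.49.0/24 through 192.168.94.0/24) before settling on 192.168.103.0/24 and creating the network with the `docker network create` command logged at 13:57:10.452539. A rough way to reproduce the "which subnets are taken" check by hand with the standard docker CLI (a sketch, not something the test runs):

    # Print the subnet(s) of every existing docker network; a candidate subnet is
    # "free" if it does not appear in this list.
    for net in $(docker network ls --format '{{.Name}}'); do
      printf '%-25s %s\n' "$net" \
        "$(docker network inspect "$net" --format '{{range .IPAM.Config}}{{.Subnet}} {{end}}')"
    done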
	I1124 13:57:10.500671  573633 kic.go:121] calculated static IP "192.168.103.2" for the "no-preload-495729" container
	I1124 13:57:10.500737  573633 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1124 13:57:10.517258  573633 cli_runner.go:164] Run: docker volume create no-preload-495729 --label name.minikube.sigs.k8s.io=no-preload-495729 --label created_by.minikube.sigs.k8s.io=true
	I1124 13:57:10.533225  573633 oci.go:103] Successfully created a docker volume no-preload-495729
	I1124 13:57:10.533293  573633 cli_runner.go:164] Run: docker run --rm --name no-preload-495729-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-495729 --entrypoint /usr/bin/test -v no-preload-495729:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1124 13:57:10.538879  573633 cache.go:162] opening:  /home/jenkins/minikube-integration/21932-348000/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0
	I1124 13:57:10.543489  573633 cache.go:162] opening:  /home/jenkins/minikube-integration/21932-348000/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1
	I1124 13:57:10.551718  573633 cache.go:162] opening:  /home/jenkins/minikube-integration/21932-348000/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1124 13:57:10.557314  573633 cache.go:162] opening:  /home/jenkins/minikube-integration/21932-348000/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1
	I1124 13:57:10.569241  573633 cache.go:162] opening:  /home/jenkins/minikube-integration/21932-348000/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1
	I1124 13:57:10.581346  573633 cache.go:162] opening:  /home/jenkins/minikube-integration/21932-348000/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1
	I1124 13:57:10.586504  573633 cache.go:162] opening:  /home/jenkins/minikube-integration/21932-348000/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I1124 13:57:10.672583  573633 cache.go:157] /home/jenkins/minikube-integration/21932-348000/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1124 13:57:10.672619  573633 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21932-348000/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 296.383762ms
	I1124 13:57:10.672637  573633 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21932-348000/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1124 13:57:10.949368  573633 cache.go:157] /home/jenkins/minikube-integration/21932-348000/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1124 13:57:10.949394  573633 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21932-348000/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1" took 573.217784ms
	I1124 13:57:10.949405  573633 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21932-348000/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1124 13:57:10.964675  573633 oci.go:107] Successfully prepared a docker volume no-preload-495729
	I1124 13:57:10.964718  573633 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	W1124 13:57:10.964794  573633 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1124 13:57:10.964821  573633 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1124 13:57:10.964859  573633 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1124 13:57:11.019986  573633 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-495729 --name no-preload-495729 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-495729 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-495729 --network no-preload-495729 --ip 192.168.103.2 --volume no-preload-495729:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1124 13:57:11.319016  573633 cli_runner.go:164] Run: docker container inspect no-preload-495729 --format={{.State.Running}}
	I1124 13:57:11.336152  573633 cli_runner.go:164] Run: docker container inspect no-preload-495729 --format={{.State.Status}}
	I1124 13:57:11.352975  573633 cli_runner.go:164] Run: docker exec no-preload-495729 stat /var/lib/dpkg/alternatives/iptables
	I1124 13:57:11.396927  573633 oci.go:144] the created container "no-preload-495729" has a running status.
	I1124 13:57:11.396962  573633 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21932-348000/.minikube/machines/no-preload-495729/id_rsa...
	I1124 13:57:11.732240  573633 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21932-348000/.minikube/machines/no-preload-495729/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1124 13:57:11.745457  573633 cache.go:157] /home/jenkins/minikube-integration/21932-348000/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1124 13:57:11.745486  573633 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21932-348000/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1" took 1.369244968s
	I1124 13:57:11.745503  573633 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21932-348000/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1124 13:57:11.760356  573633 cli_runner.go:164] Run: docker container inspect no-preload-495729 --format={{.State.Status}}
	I1124 13:57:11.782516  573633 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1124 13:57:11.782539  573633 kic_runner.go:114] Args: [docker exec --privileged no-preload-495729 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1124 13:57:11.834128  573633 cli_runner.go:164] Run: docker container inspect no-preload-495729 --format={{.State.Status}}
	I1124 13:57:11.857036  573633 machine.go:94] provisionDockerMachine start ...
	I1124 13:57:11.857148  573633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-495729
	I1124 13:57:11.878158  573633 main.go:143] libmachine: Using SSH client type: native
	I1124 13:57:11.878519  573633 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I1124 13:57:11.878554  573633 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 13:57:11.916024  573633 cache.go:157] /home/jenkins/minikube-integration/21932-348000/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1124 13:57:11.916057  573633 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21932-348000/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1" took 1.539865062s
	I1124 13:57:11.916074  573633 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21932-348000/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1124 13:57:11.965704  573633 cache.go:157] /home/jenkins/minikube-integration/21932-348000/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1124 13:57:11.965740  573633 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21932-348000/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1" took 1.589506659s
	I1124 13:57:11.965756  573633 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21932-348000/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1124 13:57:12.041467  573633 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-495729
	
	I1124 13:57:12.041497  573633 ubuntu.go:182] provisioning hostname "no-preload-495729"
	I1124 13:57:12.041569  573633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-495729
	I1124 13:57:12.062646  573633 main.go:143] libmachine: Using SSH client type: native
	I1124 13:57:12.062900  573633 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I1124 13:57:12.062921  573633 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-495729 && echo "no-preload-495729" | sudo tee /etc/hostname
	I1124 13:57:12.105540  573633 cache.go:157] /home/jenkins/minikube-integration/21932-348000/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1124 13:57:12.105568  573633 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21932-348000/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 1.729334606s
	I1124 13:57:12.105583  573633 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21932-348000/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1124 13:57:12.215707  573633 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-495729
	
	I1124 13:57:12.215782  573633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-495729
	I1124 13:57:12.232680  573633 main.go:143] libmachine: Using SSH client type: native
	I1124 13:57:12.232988  573633 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I1124 13:57:12.233021  573633 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-495729' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-495729/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-495729' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 13:57:12.275167  573633 cache.go:157] /home/jenkins/minikube-integration/21932-348000/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 exists
	I1124 13:57:12.275194  573633 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21932-348000/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0" took 1.898984854s
	I1124 13:57:12.275209  573633 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21932-348000/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1124 13:57:12.275229  573633 cache.go:87] Successfully saved all images to host disk.
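Editor's note (not part of the test log): at this point all eight images for the no-preload profile exist as tarballs in the image cache. A quick way to confirm what was saved (sketch; the CI run's cache lives under /home/jenkins/minikube-integration/21932-348000/.minikube rather than the default shown here):

    # List the cached image tarballs reported above; sizes vary by image.
    ls -lhR "$HOME/.minikube/cache/images/amd64/registry.k8s.io/" \
            "$HOME/.minikube/cache/images/amd64/gcr.io/k8s-minikube/"
    # Entries expected from the log: etcd_3.6.4-0, kube-apiserver_v1.34.1,
    # kube-controller-manager_v1.34.1, kube-scheduler_v1.34.1, kube-proxy_v1.34.1,
    # coredns/coredns_v1.12.1, pause_3.10.1, storage-provisioner_v5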
	I1124 13:57:12.375110  573633 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 13:57:12.375136  573633 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21932-348000/.minikube CaCertPath:/home/jenkins/minikube-integration/21932-348000/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21932-348000/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21932-348000/.minikube}
	I1124 13:57:12.375161  573633 ubuntu.go:190] setting up certificates
	I1124 13:57:12.375184  573633 provision.go:84] configureAuth start
	I1124 13:57:12.375247  573633 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-495729
	I1124 13:57:12.391664  573633 provision.go:143] copyHostCerts
	I1124 13:57:12.391728  573633 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-348000/.minikube/ca.pem, removing ...
	I1124 13:57:12.391743  573633 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-348000/.minikube/ca.pem
	I1124 13:57:12.391811  573633 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-348000/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21932-348000/.minikube/ca.pem (1078 bytes)
	I1124 13:57:12.391940  573633 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-348000/.minikube/cert.pem, removing ...
	I1124 13:57:12.391953  573633 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-348000/.minikube/cert.pem
	I1124 13:57:12.391995  573633 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-348000/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21932-348000/.minikube/cert.pem (1123 bytes)
	I1124 13:57:12.392079  573633 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-348000/.minikube/key.pem, removing ...
	I1124 13:57:12.392089  573633 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-348000/.minikube/key.pem
	I1124 13:57:12.392126  573633 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-348000/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21932-348000/.minikube/key.pem (1675 bytes)
	I1124 13:57:12.392197  573633 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21932-348000/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21932-348000/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21932-348000/.minikube/certs/ca-key.pem org=jenkins.no-preload-495729 san=[127.0.0.1 192.168.103.2 localhost minikube no-preload-495729]
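Editor's note (not part of the test log): the server certificate generated above carries SANs for 127.0.0.1, 192.168.103.2, localhost, minikube and the profile name. One way to verify them after the fact with standard openssl (a sketch; adjust the path for the MINIKUBE_HOME used in this run):

    # Print the Subject Alternative Name list of the generated server certificate.
    openssl x509 -noout -text -in "$HOME/.minikube/machines/server.pem" \
      | grep -A1 'Subject Alternative Name'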
	I1124 13:57:12.455602  573633 provision.go:177] copyRemoteCerts
	I1124 13:57:12.455660  573633 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 13:57:12.455713  573633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-495729
	I1124 13:57:12.472068  573633 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/no-preload-495729/id_rsa Username:docker}
	I1124 13:57:12.572699  573633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1124 13:57:12.591571  573633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1124 13:57:12.609715  573633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1124 13:57:12.626247  573633 provision.go:87] duration metric: took 251.047769ms to configureAuth
	I1124 13:57:12.626269  573633 ubuntu.go:206] setting minikube options for container-runtime
	I1124 13:57:12.626406  573633 config.go:182] Loaded profile config "no-preload-495729": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 13:57:12.626497  573633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-495729
	I1124 13:57:12.643097  573633 main.go:143] libmachine: Using SSH client type: native
	I1124 13:57:12.643297  573633 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33433 <nil> <nil>}
	I1124 13:57:12.643311  573633 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1124 13:57:12.926488  573633 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1124 13:57:12.926513  573633 machine.go:97] duration metric: took 1.069448048s to provisionDockerMachine
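Editor's note (not part of the test log): the SSH command a few lines above writes an insecure-registry override for the service CIDR into /etc/sysconfig/crio.minikube and restarts cri-o, and the echoed output confirms the content. Checking the drop-in afterwards would look roughly like this (sketch; `minikube ssh -p <profile>` is the standard way into a profile's node, here invoked against the no-preload-495729 profile):

    # Confirm the sysconfig override the provisioner wrote (run from the host).
    minikube ssh -p no-preload-495729 "cat /etc/sysconfig/crio.minikube"
    # Expected per the log above:
    # CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '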
	I1124 13:57:12.926526  573633 client.go:176] duration metric: took 2.527082252s to LocalClient.Create
	I1124 13:57:12.926542  573633 start.go:167] duration metric: took 2.527140782s to libmachine.API.Create "no-preload-495729"
	I1124 13:57:12.926551  573633 start.go:293] postStartSetup for "no-preload-495729" (driver="docker")
	I1124 13:57:12.926563  573633 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 13:57:12.926625  573633 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 13:57:12.926665  573633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-495729
	I1124 13:57:12.945012  573633 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/no-preload-495729/id_rsa Username:docker}
	I1124 13:57:13.045606  573633 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 13:57:13.049018  573633 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 13:57:13.049043  573633 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 13:57:13.049054  573633 filesync.go:126] Scanning /home/jenkins/minikube-integration/21932-348000/.minikube/addons for local assets ...
	I1124 13:57:13.049104  573633 filesync.go:126] Scanning /home/jenkins/minikube-integration/21932-348000/.minikube/files for local assets ...
	I1124 13:57:13.049186  573633 filesync.go:149] local asset: /home/jenkins/minikube-integration/21932-348000/.minikube/files/etc/ssl/certs/3515932.pem -> 3515932.pem in /etc/ssl/certs
	I1124 13:57:13.049301  573633 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 13:57:13.056718  573633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/files/etc/ssl/certs/3515932.pem --> /etc/ssl/certs/3515932.pem (1708 bytes)
	I1124 13:57:13.075452  573633 start.go:296] duration metric: took 148.886485ms for postStartSetup
	I1124 13:57:13.075793  573633 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-495729
	I1124 13:57:13.092598  573633 profile.go:143] Saving config to /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/no-preload-495729/config.json ...
	I1124 13:57:13.092813  573633 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 13:57:13.092858  573633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-495729
	I1124 13:57:13.109348  573633 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/no-preload-495729/id_rsa Username:docker}
	I1124 13:57:13.205859  573633 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 13:57:13.210582  573633 start.go:128] duration metric: took 2.813599257s to createHost
	I1124 13:57:13.210607  573633 start.go:83] releasing machines lock for "no-preload-495729", held for 2.813746179s
	I1124 13:57:13.210676  573633 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-495729
	I1124 13:57:13.228698  573633 ssh_runner.go:195] Run: cat /version.json
	I1124 13:57:13.228742  573633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-495729
	I1124 13:57:13.228818  573633 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 13:57:13.228905  573633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-495729
	I1124 13:57:13.247411  573633 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/no-preload-495729/id_rsa Username:docker}
	I1124 13:57:13.247812  573633 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/no-preload-495729/id_rsa Username:docker}
	I1124 13:57:13.343531  573633 ssh_runner.go:195] Run: systemctl --version
	I1124 13:57:13.397825  573633 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1124 13:57:13.433057  573633 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 13:57:13.437611  573633 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 13:57:13.437678  573633 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 13:57:13.461709  573633 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1124 13:57:13.461730  573633 start.go:496] detecting cgroup driver to use...
	I1124 13:57:13.461764  573633 detect.go:190] detected "systemd" cgroup driver on host os
	I1124 13:57:13.461815  573633 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1124 13:57:13.477030  573633 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1124 13:57:13.488372  573633 docker.go:218] disabling cri-docker service (if available) ...
	I1124 13:57:13.488423  573633 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 13:57:13.504261  573633 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 13:57:13.522485  573633 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 13:57:13.616540  573633 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 13:57:13.708748  573633 docker.go:234] disabling docker service ...
	I1124 13:57:13.708810  573633 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 13:57:13.727572  573633 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 13:57:13.740700  573633 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 13:57:13.832100  573633 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 13:57:13.927324  573633 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
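Editor's note (not part of the test log): the block above stops, disables and masks cri-docker and docker (and stops containerd) so that cri-o is the only runtime serving the node. A quick sanity check of the resulting unit states (sketch; `systemctl is-active` accepts several units and prints one state per line):

    # Show which container runtimes are active inside the node (run from the host).
    minikube ssh -p no-preload-495729 "systemctl is-active docker containerd crio"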
	I1124 13:57:13.939714  573633 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 13:57:13.953474  573633 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1124 13:57:13.953536  573633 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 13:57:13.962954  573633 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1124 13:57:13.963008  573633 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 13:57:13.971194  573633 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 13:57:13.979518  573633 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 13:57:13.987857  573633 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 13:57:13.995322  573633 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 13:57:14.003141  573633 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 13:57:14.015247  573633 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 13:57:14.023120  573633 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 13:57:14.029999  573633 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 13:57:14.037215  573633 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 13:57:14.120958  573633 ssh_runner.go:195] Run: sudo systemctl restart crio
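Editor's note (not part of the test log): the sed commands above point cri-o at the registry.k8s.io/pause:3.10.1 pause image, switch cgroup_manager to systemd, pin conmon_cgroup to "pod" and open unprivileged ports via default_sysctls, then reload systemd and restart crio. The report does not capture the resulting drop-in, but the keys it touched can be inspected directly (sketch):

    # Grep the keys edited above out of the cri-o drop-in (run from the host).
    # Expected values, reconstructed from the sed commands in the log:
    #   pause_image = "registry.k8s.io/pause:3.10.1"
    #   cgroup_manager = "systemd"
    #   conmon_cgroup = "pod"
    #   default_sysctls containing "net.ipv4.ip_unprivileged_port_start=0"
    minikube ssh -p no-preload-495729 \
      "grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf"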
	I1124 13:57:14.583995  573633 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1124 13:57:14.584071  573633 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1124 13:57:14.587936  573633 start.go:564] Will wait 60s for crictl version
	I1124 13:57:14.588004  573633 ssh_runner.go:195] Run: which crictl
	I1124 13:57:14.591330  573633 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 13:57:14.614989  573633 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1124 13:57:14.615057  573633 ssh_runner.go:195] Run: crio --version
	I1124 13:57:14.644346  573633 ssh_runner.go:195] Run: crio --version
	I1124 13:57:14.680867  573633 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1124 13:57:14.682195  573633 cli_runner.go:164] Run: docker network inspect no-preload-495729 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 13:57:14.698662  573633 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1124 13:57:14.702751  573633 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 13:57:14.712750  573633 kubeadm.go:884] updating cluster {Name:no-preload-495729 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-495729 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath
: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 13:57:14.712900  573633 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 13:57:14.712952  573633 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 13:57:14.738848  573633 crio.go:510] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.34.1". assuming images are not preloaded.
	I1124 13:57:14.738868  573633 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.34.1 registry.k8s.io/kube-controller-manager:v1.34.1 registry.k8s.io/kube-scheduler:v1.34.1 registry.k8s.io/kube-proxy:v1.34.1 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.4-0 registry.k8s.io/coredns/coredns:v1.12.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1124 13:57:14.738947  573633 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1124 13:57:14.738965  573633 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 13:57:14.738977  573633 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.34.1
	I1124 13:57:14.738981  573633 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1124 13:57:14.738953  573633 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.34.1
	I1124 13:57:14.738979  573633 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.12.1
	I1124 13:57:14.739006  573633 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.4-0
	I1124 13:57:14.738991  573633 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.34.1
	I1124 13:57:14.740266  573633 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.4-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.4-0
	I1124 13:57:14.740283  573633 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1124 13:57:14.740283  573633 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.12.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.12.1
	I1124 13:57:14.740266  573633 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.34.1
	I1124 13:57:14.740266  573633 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.34.1
	I1124 13:57:14.740323  573633 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.34.1
	I1124 13:57:14.740323  573633 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 13:57:14.740355  573633 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.34.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1124 13:57:14.869183  573633 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.4-0
	I1124 13:57:14.872615  573633 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.34.1
	I1124 13:57:14.876185  573633 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.34.1
	I1124 13:57:14.887969  573633 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.12.1
	I1124 13:57:14.898500  573633 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.34.1
	I1124 13:57:14.910588  573633 cache_images.go:118] "registry.k8s.io/etcd:3.6.4-0" needs transfer: "registry.k8s.io/etcd:3.6.4-0" does not exist at hash "5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115" in container runtime
	I1124 13:57:14.910638  573633 cri.go:218] Removing image: registry.k8s.io/etcd:3.6.4-0
	I1124 13:57:14.910685  573633 ssh_runner.go:195] Run: which crictl
	I1124 13:57:14.913079  573633 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.34.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.34.1" does not exist at hash "c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f" in container runtime
	I1124 13:57:14.913122  573633 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.34.1
	I1124 13:57:14.913171  573633 ssh_runner.go:195] Run: which crictl
	I1124 13:57:14.915095  573633 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.34.1
	I1124 13:57:14.917783  573633 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.34.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.34.1" does not exist at hash "c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97" in container runtime
	I1124 13:57:14.917821  573633 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.34.1
	I1124 13:57:14.917858  573633 ssh_runner.go:195] Run: which crictl
	I1124 13:57:14.930620  573633 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.12.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.12.1" does not exist at hash "52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969" in container runtime
	I1124 13:57:14.930685  573633 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.12.1
	I1124 13:57:14.930728  573633 ssh_runner.go:195] Run: which crictl
	I1124 13:57:14.940096  573633 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.34.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.34.1" does not exist at hash "7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813" in container runtime
	I1124 13:57:14.940111  573633 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1124 13:57:14.940133  573633 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.34.1
	I1124 13:57:14.940164  573633 ssh_runner.go:195] Run: which crictl
	I1124 13:57:14.940183  573633 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1124 13:57:14.953091  573633 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.34.1" needs transfer: "registry.k8s.io/kube-proxy:v1.34.1" does not exist at hash "fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7" in container runtime
	I1124 13:57:14.953131  573633 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.34.1
	I1124 13:57:14.953162  573633 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1124 13:57:14.953185  573633 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1124 13:57:14.953169  573633 ssh_runner.go:195] Run: which crictl
	I1124 13:57:14.970208  573633 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1124 13:57:14.970208  573633 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1124 13:57:14.970268  573633 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1124 13:57:14.988704  573633 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1124 13:57:14.989985  573633 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1124 13:57:14.989995  573633 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1124 13:57:15.005991  573633 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.34.1
	I1124 13:57:15.006122  573633 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1124 13:57:15.006264  573633 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/etcd:3.6.4-0
	I1124 13:57:15.025528  573633 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.12.1
	I1124 13:57:15.025615  573633 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.34.1
	I1124 13:57:15.027800  573633 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1124 13:57:15.042874  573633 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21932-348000/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0
	I1124 13:57:15.042974  573633 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0
	I1124 13:57:15.043015  573633 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21932-348000/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1
	I1124 13:57:15.043070  573633 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.34.1
	I1124 13:57:15.043087  573633 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1124 13:57:15.059703  573633 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21932-348000/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1
	I1124 13:57:15.059785  573633 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21932-348000/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1
	I1124 13:57:15.059795  573633 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1124 13:57:15.059853  573633 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.4-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.4-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.4-0': No such file or directory
	I1124 13:57:15.059877  573633 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/kube-proxy:v1.34.1
	I1124 13:57:15.059877  573633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 --> /var/lib/minikube/images/etcd_3.6.4-0 (74320896 bytes)
	I1124 13:57:15.059918  573633 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1
	I1124 13:57:15.059913  573633 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.34.1': No such file or directory
	I1124 13:57:15.059937  573633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 --> /var/lib/minikube/images/kube-controller-manager_v1.34.1 (22831104 bytes)
	I1124 13:57:15.091430  573633 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.34.1': No such file or directory
	I1124 13:57:15.091460  573633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 --> /var/lib/minikube/images/kube-apiserver_v1.34.1 (27073024 bytes)
	I1124 13:57:15.092766  573633 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21932-348000/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1
	I1124 13:57:15.092861  573633 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1124 13:57:15.092912  573633 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.12.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.12.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.12.1': No such file or directory
	I1124 13:57:15.092958  573633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 --> /var/lib/minikube/images/coredns_v1.12.1 (22394368 bytes)
	I1124 13:57:15.093610  573633 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21932-348000/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1
	I1124 13:57:15.093696  573633 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1
	I1124 13:57:15.196219  573633 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 13:57:15.216101  573633 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.34.1': No such file or directory
	I1124 13:57:15.216131  573633 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.34.1: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.34.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.34.1': No such file or directory
	I1124 13:57:15.216143  573633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 --> /var/lib/minikube/images/kube-proxy_v1.34.1 (25966080 bytes)
	I1124 13:57:15.216159  573633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 --> /var/lib/minikube/images/kube-scheduler_v1.34.1 (17396736 bytes)
	I1124 13:57:13.519562  549693 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.073756563s)
	W1124 13:57:13.519597  549693 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I1124 13:57:13.519606  549693 logs.go:123] Gathering logs for kube-apiserver [281b403f5869d6fd99f64af54bb1a111f4065c8ae8df6063d003eed1dc0818d3] ...
	I1124 13:57:13.519620  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 281b403f5869d6fd99f64af54bb1a111f4065c8ae8df6063d003eed1dc0818d3"
	I1124 13:57:13.558382  549693 logs.go:123] Gathering logs for kube-controller-manager [1ccaf986d410e90f1733304d0ae319bacab43b9203872fcd4f8ebea4a60b66f9] ...
	I1124 13:57:13.558419  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1ccaf986d410e90f1733304d0ae319bacab43b9203872fcd4f8ebea4a60b66f9"
	I1124 13:57:13.589387  549693 logs.go:123] Gathering logs for CRI-O ...
	I1124 13:57:13.589418  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 13:57:13.639234  549693 logs.go:123] Gathering logs for kubelet ...
	I1124 13:57:13.639260  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 13:57:13.710654  549693 logs.go:123] Gathering logs for kube-apiserver [d5c7a42a438924178c1e3946f6437111c8fac5f01d25a6f15d1f63c567efb073] ...
	I1124 13:57:13.710684  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d5c7a42a438924178c1e3946f6437111c8fac5f01d25a6f15d1f63c567efb073"
	I1124 13:57:13.742326  549693 logs.go:123] Gathering logs for kube-scheduler [09474ec71ef80f8f9036b351250c36774a1251b7d7957550d815fc37dbb0f4ff] ...
	I1124 13:57:13.742350  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 09474ec71ef80f8f9036b351250c36774a1251b7d7957550d815fc37dbb0f4ff"
	I1124 13:57:13.793435  549693 logs.go:123] Gathering logs for kube-controller-manager [196609a37eafb369c7245cccb47becbe53c492038396d82778751fd2bc00c121] ...
	I1124 13:57:13.793464  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 196609a37eafb369c7245cccb47becbe53c492038396d82778751fd2bc00c121"
	I1124 13:57:13.822589  549693 logs.go:123] Gathering logs for container status ...
	I1124 13:57:13.822622  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 13:57:13.853301  549693 logs.go:123] Gathering logs for dmesg ...
	I1124 13:57:13.853328  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 13:57:13.202350  571407 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 13:57:13.202379  571407 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21932-348000/.minikube CaCertPath:/home/jenkins/minikube-integration/21932-348000/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21932-348000/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21932-348000/.minikube}
	I1124 13:57:13.202426  571407 ubuntu.go:190] setting up certificates
	I1124 13:57:13.202439  571407 provision.go:84] configureAuth start
	I1124 13:57:13.202498  571407 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-551674
	I1124 13:57:13.221012  571407 provision.go:143] copyHostCerts
	I1124 13:57:13.221073  571407 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-348000/.minikube/ca.pem, removing ...
	I1124 13:57:13.221087  571407 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-348000/.minikube/ca.pem
	I1124 13:57:13.221151  571407 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-348000/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21932-348000/.minikube/ca.pem (1078 bytes)
	I1124 13:57:13.221273  571407 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-348000/.minikube/cert.pem, removing ...
	I1124 13:57:13.221284  571407 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-348000/.minikube/cert.pem
	I1124 13:57:13.221318  571407 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-348000/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21932-348000/.minikube/cert.pem (1123 bytes)
	I1124 13:57:13.221407  571407 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-348000/.minikube/key.pem, removing ...
	I1124 13:57:13.221417  571407 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-348000/.minikube/key.pem
	I1124 13:57:13.221447  571407 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-348000/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21932-348000/.minikube/key.pem (1675 bytes)
	I1124 13:57:13.221524  571407 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21932-348000/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21932-348000/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21932-348000/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-551674 san=[127.0.0.1 192.168.94.2 localhost minikube old-k8s-version-551674]
	I1124 13:57:13.398720  571407 provision.go:177] copyRemoteCerts
	I1124 13:57:13.398770  571407 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 13:57:13.398802  571407 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-551674
	I1124 13:57:13.416935  571407 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/old-k8s-version-551674/id_rsa Username:docker}
	I1124 13:57:13.518029  571407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1124 13:57:13.538168  571407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I1124 13:57:13.564571  571407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1124 13:57:13.583942  571407 provision.go:87] duration metric: took 381.485915ms to configureAuth
	I1124 13:57:13.583973  571407 ubuntu.go:206] setting minikube options for container-runtime
	I1124 13:57:13.584161  571407 config.go:182] Loaded profile config "old-k8s-version-551674": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1124 13:57:13.584302  571407 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-551674
	I1124 13:57:13.602852  571407 main.go:143] libmachine: Using SSH client type: native
	I1124 13:57:13.603185  571407 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33428 <nil> <nil>}
	I1124 13:57:13.603215  571407 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1124 13:57:13.914557  571407 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1124 13:57:13.914589  571407 machine.go:97] duration metric: took 4.211807799s to provisionDockerMachine
	I1124 13:57:13.914600  571407 client.go:176] duration metric: took 10.496435799s to LocalClient.Create
	I1124 13:57:13.914621  571407 start.go:167] duration metric: took 10.496496006s to libmachine.API.Create "old-k8s-version-551674"
	I1124 13:57:13.914630  571407 start.go:293] postStartSetup for "old-k8s-version-551674" (driver="docker")
	I1124 13:57:13.914643  571407 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 13:57:13.914705  571407 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 13:57:13.914750  571407 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-551674
	I1124 13:57:13.932579  571407 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/old-k8s-version-551674/id_rsa Username:docker}
	I1124 13:57:14.034959  571407 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 13:57:14.038500  571407 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 13:57:14.038524  571407 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 13:57:14.038534  571407 filesync.go:126] Scanning /home/jenkins/minikube-integration/21932-348000/.minikube/addons for local assets ...
	I1124 13:57:14.038589  571407 filesync.go:126] Scanning /home/jenkins/minikube-integration/21932-348000/.minikube/files for local assets ...
	I1124 13:57:14.038685  571407 filesync.go:149] local asset: /home/jenkins/minikube-integration/21932-348000/.minikube/files/etc/ssl/certs/3515932.pem -> 3515932.pem in /etc/ssl/certs
	I1124 13:57:14.038849  571407 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 13:57:14.046043  571407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/files/etc/ssl/certs/3515932.pem --> /etc/ssl/certs/3515932.pem (1708 bytes)
	I1124 13:57:14.066850  571407 start.go:296] duration metric: took 152.203471ms for postStartSetup
	I1124 13:57:14.067252  571407 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-551674
	I1124 13:57:14.086950  571407 profile.go:143] Saving config to /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/old-k8s-version-551674/config.json ...
	I1124 13:57:14.087194  571407 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 13:57:14.087267  571407 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-551674
	I1124 13:57:14.103520  571407 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/old-k8s-version-551674/id_rsa Username:docker}
	I1124 13:57:14.200726  571407 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 13:57:14.205096  571407 start.go:128] duration metric: took 10.788537853s to createHost
	I1124 13:57:14.205121  571407 start.go:83] releasing machines lock for "old-k8s-version-551674", held for 10.788676619s
	I1124 13:57:14.205194  571407 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" old-k8s-version-551674
	I1124 13:57:14.222491  571407 ssh_runner.go:195] Run: cat /version.json
	I1124 13:57:14.222545  571407 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-551674
	I1124 13:57:14.222561  571407 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 13:57:14.222642  571407 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-551674
	I1124 13:57:14.239699  571407 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/old-k8s-version-551674/id_rsa Username:docker}
	I1124 13:57:14.240606  571407 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/old-k8s-version-551674/id_rsa Username:docker}
	I1124 13:57:14.407100  571407 ssh_runner.go:195] Run: systemctl --version
	I1124 13:57:14.413662  571407 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1124 13:57:14.447074  571407 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 13:57:14.451669  571407 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 13:57:14.451734  571407 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 13:57:14.480870  571407 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1124 13:57:14.480911  571407 start.go:496] detecting cgroup driver to use...
	I1124 13:57:14.480950  571407 detect.go:190] detected "systemd" cgroup driver on host os
	I1124 13:57:14.481002  571407 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1124 13:57:14.498242  571407 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1124 13:57:14.511398  571407 docker.go:218] disabling cri-docker service (if available) ...
	I1124 13:57:14.511446  571407 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 13:57:14.527283  571407 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 13:57:14.545151  571407 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 13:57:14.627820  571407 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 13:57:14.723386  571407 docker.go:234] disabling docker service ...
	I1124 13:57:14.723448  571407 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 13:57:14.743576  571407 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 13:57:14.755922  571407 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 13:57:14.840061  571407 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 13:57:14.943460  571407 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 13:57:14.959320  571407 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 13:57:14.977831  571407 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I1124 13:57:14.977911  571407 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 13:57:14.992400  571407 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1124 13:57:14.992460  571407 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 13:57:15.005051  571407 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 13:57:15.017430  571407 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 13:57:15.031626  571407 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 13:57:15.044305  571407 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 13:57:15.056322  571407 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 13:57:15.075714  571407 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 13:57:15.086756  571407 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 13:57:15.095024  571407 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 13:57:15.102382  571407 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 13:57:15.185193  571407 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1124 13:57:15.346750  571407 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1124 13:57:15.346825  571407 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1124 13:57:15.352113  571407 start.go:564] Will wait 60s for crictl version
	I1124 13:57:15.352172  571407 ssh_runner.go:195] Run: which crictl
	I1124 13:57:15.356702  571407 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 13:57:15.389387  571407 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1124 13:57:15.389481  571407 ssh_runner.go:195] Run: crio --version
	I1124 13:57:15.429438  571407 ssh_runner.go:195] Run: crio --version
	I1124 13:57:15.473653  571407 out.go:179] * Preparing Kubernetes v1.28.0 on CRI-O 1.34.2 ...
	I1124 13:57:15.474976  571407 cli_runner.go:164] Run: docker network inspect old-k8s-version-551674 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 13:57:15.499863  571407 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1124 13:57:15.505506  571407 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 13:57:15.519694  571407 kubeadm.go:884] updating cluster {Name:old-k8s-version-551674 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-551674 Namespace:default APIServerHAVIP: APIServerName:minik
ubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirm
warePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 13:57:15.519846  571407 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
	I1124 13:57:15.519915  571407 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 13:57:15.564015  571407 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 13:57:15.564042  571407 crio.go:433] Images already preloaded, skipping extraction
	I1124 13:57:15.564110  571407 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 13:57:15.593925  571407 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 13:57:15.593954  571407 cache_images.go:86] Images are preloaded, skipping loading
	I1124 13:57:15.593964  571407 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.28.0 crio true true} ...
	I1124 13:57:15.594064  571407 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=old-k8s-version-551674 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-551674 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1124 13:57:15.594136  571407 ssh_runner.go:195] Run: crio config
	I1124 13:57:15.664717  571407 cni.go:84] Creating CNI manager for ""
	I1124 13:57:15.664740  571407 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 13:57:15.664758  571407 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 13:57:15.664783  571407 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-551674 NodeName:old-k8s-version-551674 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPod
Path:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 13:57:15.664933  571407 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "old-k8s-version-551674"
	  kubeletExtraArgs:
	    node-ip: 192.168.94.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1124 13:57:15.664996  571407 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1124 13:57:15.673875  571407 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 13:57:15.673953  571407 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 13:57:15.685520  571407 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I1124 13:57:15.701279  571407 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 13:57:15.716241  571407 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2159 bytes)
	I1124 13:57:15.728309  571407 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1124 13:57:15.731923  571407 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 13:57:15.741828  571407 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 13:57:15.820551  571407 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 13:57:15.841024  571407 certs.go:69] Setting up /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/old-k8s-version-551674 for IP: 192.168.94.2
	I1124 13:57:15.841046  571407 certs.go:195] generating shared ca certs ...
	I1124 13:57:15.841065  571407 certs.go:227] acquiring lock for ca certs: {Name:mk929c5478505d0d4647158f3ccc02830de7b582 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:57:15.841226  571407 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21932-348000/.minikube/ca.key
	I1124 13:57:15.841291  571407 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21932-348000/.minikube/proxy-client-ca.key
	I1124 13:57:15.841305  571407 certs.go:257] generating profile certs ...
	I1124 13:57:15.841368  571407 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/old-k8s-version-551674/client.key
	I1124 13:57:15.841382  571407 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/old-k8s-version-551674/client.crt with IP's: []
	I1124 13:57:15.913400  571407 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/old-k8s-version-551674/client.crt ...
	I1124 13:57:15.913425  571407 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/old-k8s-version-551674/client.crt: {Name:mk49b7f1d5ae517a4372141da3d88bc1e1a6f1d8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:57:15.913612  571407 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/old-k8s-version-551674/client.key ...
	I1124 13:57:15.913629  571407 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/old-k8s-version-551674/client.key: {Name:mk211dfe7ae53822a5305fc5bb636e978477bda0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:57:15.913773  571407 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/old-k8s-version-551674/apiserver.key.a0d4b1b2
	I1124 13:57:15.913797  571407 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/old-k8s-version-551674/apiserver.crt.a0d4b1b2 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1124 13:57:15.975994  571407 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/old-k8s-version-551674/apiserver.crt.a0d4b1b2 ...
	I1124 13:57:15.976014  571407 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/old-k8s-version-551674/apiserver.crt.a0d4b1b2: {Name:mk707ad7d5fc3abfd025bfbdb2ef4548d9633c71 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:57:15.976163  571407 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/old-k8s-version-551674/apiserver.key.a0d4b1b2 ...
	I1124 13:57:15.976184  571407 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/old-k8s-version-551674/apiserver.key.a0d4b1b2: {Name:mkb6618ed8dd343e3fa22300407a727e0fdb5dc3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:57:15.976296  571407 certs.go:382] copying /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/old-k8s-version-551674/apiserver.crt.a0d4b1b2 -> /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/old-k8s-version-551674/apiserver.crt
	I1124 13:57:15.976370  571407 certs.go:386] copying /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/old-k8s-version-551674/apiserver.key.a0d4b1b2 -> /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/old-k8s-version-551674/apiserver.key
	I1124 13:57:15.976423  571407 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/old-k8s-version-551674/proxy-client.key
	I1124 13:57:15.976437  571407 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/old-k8s-version-551674/proxy-client.crt with IP's: []
	I1124 13:57:16.029865  571407 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/old-k8s-version-551674/proxy-client.crt ...
	I1124 13:57:16.029896  571407 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/old-k8s-version-551674/proxy-client.crt: {Name:mke8c630c68d97aa112356eb2a1d2857d817178e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:57:16.030077  571407 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/old-k8s-version-551674/proxy-client.key ...
	I1124 13:57:16.030095  571407 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/old-k8s-version-551674/proxy-client.key: {Name:mkf3bd1fd01857aa08eceac6ffacefd52aca0f9f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:57:16.030315  571407 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-348000/.minikube/certs/351593.pem (1338 bytes)
	W1124 13:57:16.030353  571407 certs.go:480] ignoring /home/jenkins/minikube-integration/21932-348000/.minikube/certs/351593_empty.pem, impossibly tiny 0 bytes
	I1124 13:57:16.030363  571407 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-348000/.minikube/certs/ca-key.pem (1675 bytes)
	I1124 13:57:16.030387  571407 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-348000/.minikube/certs/ca.pem (1078 bytes)
	I1124 13:57:16.030419  571407 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-348000/.minikube/certs/cert.pem (1123 bytes)
	I1124 13:57:16.030445  571407 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-348000/.minikube/certs/key.pem (1675 bytes)
	I1124 13:57:16.030486  571407 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-348000/.minikube/files/etc/ssl/certs/3515932.pem (1708 bytes)
	I1124 13:57:16.031086  571407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 13:57:16.048988  571407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 13:57:16.065309  571407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 13:57:16.082398  571407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1124 13:57:16.098911  571407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/old-k8s-version-551674/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1124 13:57:16.115120  571407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/old-k8s-version-551674/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1124 13:57:16.131401  571407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/old-k8s-version-551674/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 13:57:16.148615  571407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/old-k8s-version-551674/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1124 13:57:16.199862  571407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/files/etc/ssl/certs/3515932.pem --> /usr/share/ca-certificates/3515932.pem (1708 bytes)
	I1124 13:57:16.260944  571407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 13:57:16.277737  571407 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/certs/351593.pem --> /usr/share/ca-certificates/351593.pem (1338 bytes)
	I1124 13:57:16.294173  571407 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 13:57:16.306075  571407 ssh_runner.go:195] Run: openssl version
	I1124 13:57:16.311813  571407 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3515932.pem && ln -fs /usr/share/ca-certificates/3515932.pem /etc/ssl/certs/3515932.pem"
	I1124 13:57:16.319518  571407 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3515932.pem
	I1124 13:57:16.322880  571407 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 13:19 /usr/share/ca-certificates/3515932.pem
	I1124 13:57:16.322941  571407 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3515932.pem
	I1124 13:57:16.356357  571407 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3515932.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 13:57:16.364146  571407 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 13:57:16.372066  571407 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 13:57:16.375542  571407 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 13:14 /usr/share/ca-certificates/minikubeCA.pem
	I1124 13:57:16.375580  571407 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 13:57:16.411251  571407 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 13:57:16.419362  571407 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/351593.pem && ln -fs /usr/share/ca-certificates/351593.pem /etc/ssl/certs/351593.pem"
	I1124 13:57:16.427222  571407 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/351593.pem
	I1124 13:57:16.430760  571407 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 13:19 /usr/share/ca-certificates/351593.pem
	I1124 13:57:16.430801  571407 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/351593.pem
	I1124 13:57:16.465249  571407 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/351593.pem /etc/ssl/certs/51391683.0"
	I1124 13:57:16.473859  571407 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 13:57:16.477268  571407 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1124 13:57:16.477337  571407 kubeadm.go:401] StartCluster: {Name:old-k8s-version-551674 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-551674 Namespace:default APIServerHAVIP: APIServerName:minikube
CA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwar
ePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 13:57:16.477426  571407 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 13:57:16.477483  571407 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 13:57:16.506759  571407 cri.go:89] found id: ""
	I1124 13:57:16.506814  571407 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 13:57:16.515866  571407 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1124 13:57:16.523775  571407 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1124 13:57:16.523826  571407 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 13:57:16.532502  571407 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1124 13:57:16.532522  571407 kubeadm.go:158] found existing configuration files:
	
	I1124 13:57:16.532570  571407 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1124 13:57:16.540467  571407 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1124 13:57:16.540518  571407 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1124 13:57:16.548607  571407 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1124 13:57:16.556445  571407 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1124 13:57:16.556494  571407 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 13:57:16.563968  571407 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1124 13:57:16.572359  571407 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1124 13:57:16.572430  571407 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 13:57:16.580125  571407 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1124 13:57:16.588611  571407 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1124 13:57:16.588658  571407 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1124 13:57:16.596399  571407 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1124 13:57:16.666278  571407 kubeadm.go:319] [init] Using Kubernetes version: v1.28.0
	I1124 13:57:16.666356  571407 kubeadm.go:319] [preflight] Running pre-flight checks
	I1124 13:57:16.712711  571407 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1124 13:57:16.712838  571407 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1124 13:57:16.712920  571407 kubeadm.go:319] OS: Linux
	I1124 13:57:16.712997  571407 kubeadm.go:319] CGROUPS_CPU: enabled
	I1124 13:57:16.713075  571407 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1124 13:57:16.713159  571407 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1124 13:57:16.713238  571407 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1124 13:57:16.713312  571407 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1124 13:57:16.713385  571407 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1124 13:57:16.713468  571407 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1124 13:57:16.713532  571407 kubeadm.go:319] CGROUPS_IO: enabled
	I1124 13:57:16.802858  571407 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1124 13:57:16.803038  571407 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1124 13:57:16.803167  571407 kubeadm.go:319] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I1124 13:57:16.980343  571407 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1124 13:57:16.983596  571407 out.go:252]   - Generating certificates and keys ...
	I1124 13:57:16.983725  571407 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1124 13:57:16.983863  571407 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1124 13:57:17.243683  571407 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1124 13:57:17.508216  571407 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1124 13:57:17.737530  571407 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1124 13:57:17.797058  571407 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1124 13:57:17.933081  571407 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1124 13:57:17.933277  571407 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-551674] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1124 13:57:17.987172  571407 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1124 13:57:17.987378  571407 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-551674] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1124 13:57:18.257965  571407 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1124 13:57:18.503087  571407 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1124 13:57:18.590801  571407 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1124 13:57:18.590928  571407 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1124 13:57:18.733783  571407 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1124 13:57:18.979974  571407 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1124 13:57:19.160310  571407 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1124 13:57:19.303006  571407 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1124 13:57:19.303797  571407 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1124 13:57:19.311391  571407 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1124 13:57:15.291776  573633 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1124 13:57:15.291833  573633 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 13:57:15.291899  573633 ssh_runner.go:195] Run: which crictl
	I1124 13:57:15.338903  573633 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 13:57:15.395563  573633 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 13:57:15.434242  573633 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 13:57:15.470602  573633 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21932-348000/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I1124 13:57:15.470716  573633 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1124 13:57:15.479881  573633 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1124 13:57:15.479980  573633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I1124 13:57:15.500126  573633 crio.go:275] Loading image: /var/lib/minikube/images/coredns_v1.12.1
	I1124 13:57:15.500187  573633 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1
	I1124 13:57:15.571122  573633 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1124 13:57:17.473564  573633 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.12.1: (1.973351982s)
	I1124 13:57:17.473601  573633 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21932-348000/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 from cache
	I1124 13:57:17.473621  573633 crio.go:275] Loading image: /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1124 13:57:17.473666  573633 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1
	I1124 13:57:17.473690  573633 ssh_runner.go:235] Completed: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1: (1.902531252s)
	I1124 13:57:17.473744  573633 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f" in container runtime
	I1124 13:57:17.473779  573633 cri.go:218] Removing image: registry.k8s.io/pause:3.10.1
	I1124 13:57:17.473824  573633 ssh_runner.go:195] Run: which crictl
	I1124 13:57:18.583660  573633 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.34.1: (1.109964125s)
	I1124 13:57:18.583700  573633 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21932-348000/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 from cache
	I1124 13:57:18.583701  573633 ssh_runner.go:235] Completed: which crictl: (1.109851679s)
	I1124 13:57:18.583727  573633 crio.go:275] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1124 13:57:18.583768  573633 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1124 13:57:18.583779  573633 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1
	I1124 13:57:19.749731  573633 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.34.1: (1.165923358s)
	I1124 13:57:19.749759  573633 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21932-348000/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 from cache
	I1124 13:57:19.749760  573633 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1: (1.165962721s)
	I1124 13:57:19.749784  573633 crio.go:275] Loading image: /var/lib/minikube/images/kube-apiserver_v1.34.1
	I1124 13:57:19.749823  573633 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1124 13:57:19.749828  573633 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1
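	(The 573633 run above side-loads cached image tarballs with "podman load" and removes stale tags with crictl; in this base image podman and CRI-O appear to share the same containers/storage, which is why an image loaded through podman becomes visible to the CRI. A rough manual equivalent for one image, with paths taken from the log:)
	    sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1
	    sudo crictl images | grep kube-apiserver   # image now visible to the CRI-O runtime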
	I1124 13:57:16.375925  549693 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 13:57:16.792799  549693 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": read tcp 192.168.76.1:33976->192.168.76.2:8443: read: connection reset by peer
	I1124 13:57:16.792871  549693 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 13:57:16.792984  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 13:57:16.829616  549693 cri.go:89] found id: "d5c7a42a438924178c1e3946f6437111c8fac5f01d25a6f15d1f63c567efb073"
	I1124 13:57:16.829644  549693 cri.go:89] found id: "281b403f5869d6fd99f64af54bb1a111f4065c8ae8df6063d003eed1dc0818d3"
	I1124 13:57:16.829651  549693 cri.go:89] found id: ""
	I1124 13:57:16.829661  549693 logs.go:282] 2 containers: [d5c7a42a438924178c1e3946f6437111c8fac5f01d25a6f15d1f63c567efb073 281b403f5869d6fd99f64af54bb1a111f4065c8ae8df6063d003eed1dc0818d3]
	I1124 13:57:16.829719  549693 ssh_runner.go:195] Run: which crictl
	I1124 13:57:16.834625  549693 ssh_runner.go:195] Run: which crictl
	I1124 13:57:16.838722  549693 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 13:57:16.838805  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 13:57:16.871036  549693 cri.go:89] found id: ""
	I1124 13:57:16.871065  549693 logs.go:282] 0 containers: []
	W1124 13:57:16.871076  549693 logs.go:284] No container was found matching "etcd"
	I1124 13:57:16.871084  549693 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 13:57:16.871143  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 13:57:16.901221  549693 cri.go:89] found id: ""
	I1124 13:57:16.901254  549693 logs.go:282] 0 containers: []
	W1124 13:57:16.901266  549693 logs.go:284] No container was found matching "coredns"
	I1124 13:57:16.901274  549693 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 13:57:16.901340  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 13:57:16.935298  549693 cri.go:89] found id: "09474ec71ef80f8f9036b351250c36774a1251b7d7957550d815fc37dbb0f4ff"
	I1124 13:57:16.935332  549693 cri.go:89] found id: ""
	I1124 13:57:16.935344  549693 logs.go:282] 1 containers: [09474ec71ef80f8f9036b351250c36774a1251b7d7957550d815fc37dbb0f4ff]
	I1124 13:57:16.935578  549693 ssh_runner.go:195] Run: which crictl
	I1124 13:57:16.940815  549693 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 13:57:16.940883  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 13:57:16.972627  549693 cri.go:89] found id: ""
	I1124 13:57:16.972656  549693 logs.go:282] 0 containers: []
	W1124 13:57:16.972668  549693 logs.go:284] No container was found matching "kube-proxy"
	I1124 13:57:16.972676  549693 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 13:57:16.972742  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 13:57:17.003837  549693 cri.go:89] found id: "196609a37eafb369c7245cccb47becbe53c492038396d82778751fd2bc00c121"
	I1124 13:57:17.003860  549693 cri.go:89] found id: "1ccaf986d410e90f1733304d0ae319bacab43b9203872fcd4f8ebea4a60b66f9"
	I1124 13:57:17.003866  549693 cri.go:89] found id: ""
	I1124 13:57:17.003877  549693 logs.go:282] 2 containers: [196609a37eafb369c7245cccb47becbe53c492038396d82778751fd2bc00c121 1ccaf986d410e90f1733304d0ae319bacab43b9203872fcd4f8ebea4a60b66f9]
	I1124 13:57:17.003953  549693 ssh_runner.go:195] Run: which crictl
	I1124 13:57:17.008578  549693 ssh_runner.go:195] Run: which crictl
	I1124 13:57:17.012343  549693 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 13:57:17.012403  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 13:57:17.038718  549693 cri.go:89] found id: ""
	I1124 13:57:17.038739  549693 logs.go:282] 0 containers: []
	W1124 13:57:17.038748  549693 logs.go:284] No container was found matching "kindnet"
	I1124 13:57:17.038755  549693 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1124 13:57:17.038803  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 13:57:17.065817  549693 cri.go:89] found id: ""
	I1124 13:57:17.065838  549693 logs.go:282] 0 containers: []
	W1124 13:57:17.065848  549693 logs.go:284] No container was found matching "storage-provisioner"
	I1124 13:57:17.065865  549693 logs.go:123] Gathering logs for container status ...
	I1124 13:57:17.065878  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 13:57:17.098390  549693 logs.go:123] Gathering logs for kubelet ...
	I1124 13:57:17.098421  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 13:57:17.169632  549693 logs.go:123] Gathering logs for dmesg ...
	I1124 13:57:17.169672  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 13:57:17.190253  549693 logs.go:123] Gathering logs for describe nodes ...
	I1124 13:57:17.190286  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 13:57:17.252467  549693 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 13:57:17.252489  549693 logs.go:123] Gathering logs for kube-apiserver [281b403f5869d6fd99f64af54bb1a111f4065c8ae8df6063d003eed1dc0818d3] ...
	I1124 13:57:17.252506  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 281b403f5869d6fd99f64af54bb1a111f4065c8ae8df6063d003eed1dc0818d3"
	I1124 13:57:17.287708  549693 logs.go:123] Gathering logs for CRI-O ...
	I1124 13:57:17.287753  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 13:57:17.334910  549693 logs.go:123] Gathering logs for kube-apiserver [d5c7a42a438924178c1e3946f6437111c8fac5f01d25a6f15d1f63c567efb073] ...
	I1124 13:57:17.334943  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d5c7a42a438924178c1e3946f6437111c8fac5f01d25a6f15d1f63c567efb073"
	I1124 13:57:17.373257  549693 logs.go:123] Gathering logs for kube-scheduler [09474ec71ef80f8f9036b351250c36774a1251b7d7957550d815fc37dbb0f4ff] ...
	I1124 13:57:17.373306  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 09474ec71ef80f8f9036b351250c36774a1251b7d7957550d815fc37dbb0f4ff"
	I1124 13:57:17.436437  549693 logs.go:123] Gathering logs for kube-controller-manager [196609a37eafb369c7245cccb47becbe53c492038396d82778751fd2bc00c121] ...
	I1124 13:57:17.436472  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 196609a37eafb369c7245cccb47becbe53c492038396d82778751fd2bc00c121"
	I1124 13:57:17.463545  549693 logs.go:123] Gathering logs for kube-controller-manager [1ccaf986d410e90f1733304d0ae319bacab43b9203872fcd4f8ebea4a60b66f9] ...
	I1124 13:57:17.463571  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 1ccaf986d410e90f1733304d0ae319bacab43b9203872fcd4f8ebea4a60b66f9"
	I1124 13:57:19.996015  549693 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 13:57:19.996443  549693 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 13:57:19.996503  549693 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 13:57:19.996557  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 13:57:20.024690  549693 cri.go:89] found id: "d5c7a42a438924178c1e3946f6437111c8fac5f01d25a6f15d1f63c567efb073"
	I1124 13:57:20.024711  549693 cri.go:89] found id: ""
	I1124 13:57:20.024721  549693 logs.go:282] 1 containers: [d5c7a42a438924178c1e3946f6437111c8fac5f01d25a6f15d1f63c567efb073]
	I1124 13:57:20.024773  549693 ssh_runner.go:195] Run: which crictl
	I1124 13:57:20.028789  549693 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 13:57:20.028848  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 13:57:20.054154  549693 cri.go:89] found id: ""
	I1124 13:57:20.054181  549693 logs.go:282] 0 containers: []
	W1124 13:57:20.054192  549693 logs.go:284] No container was found matching "etcd"
	I1124 13:57:20.054200  549693 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 13:57:20.054241  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 13:57:20.079287  549693 cri.go:89] found id: ""
	I1124 13:57:20.079313  549693 logs.go:282] 0 containers: []
	W1124 13:57:20.079325  549693 logs.go:284] No container was found matching "coredns"
	I1124 13:57:20.079332  549693 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 13:57:20.079376  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 13:57:20.105401  549693 cri.go:89] found id: "09474ec71ef80f8f9036b351250c36774a1251b7d7957550d815fc37dbb0f4ff"
	I1124 13:57:20.105423  549693 cri.go:89] found id: ""
	I1124 13:57:20.105432  549693 logs.go:282] 1 containers: [09474ec71ef80f8f9036b351250c36774a1251b7d7957550d815fc37dbb0f4ff]
	I1124 13:57:20.105487  549693 ssh_runner.go:195] Run: which crictl
	I1124 13:57:20.109416  549693 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 13:57:20.109467  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 13:57:20.135667  549693 cri.go:89] found id: ""
	I1124 13:57:20.135694  549693 logs.go:282] 0 containers: []
	W1124 13:57:20.135704  549693 logs.go:284] No container was found matching "kube-proxy"
	I1124 13:57:20.135711  549693 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 13:57:20.135763  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 13:57:20.162305  549693 cri.go:89] found id: "196609a37eafb369c7245cccb47becbe53c492038396d82778751fd2bc00c121"
	I1124 13:57:20.162327  549693 cri.go:89] found id: ""
	I1124 13:57:20.162337  549693 logs.go:282] 1 containers: [196609a37eafb369c7245cccb47becbe53c492038396d82778751fd2bc00c121]
	I1124 13:57:20.162392  549693 ssh_runner.go:195] Run: which crictl
	I1124 13:57:20.166315  549693 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 13:57:20.166375  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 13:57:20.191606  549693 cri.go:89] found id: ""
	I1124 13:57:20.191629  549693 logs.go:282] 0 containers: []
	W1124 13:57:20.191639  549693 logs.go:284] No container was found matching "kindnet"
	I1124 13:57:20.191646  549693 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1124 13:57:20.191703  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 13:57:20.218680  549693 cri.go:89] found id: ""
	I1124 13:57:20.218708  549693 logs.go:282] 0 containers: []
	W1124 13:57:20.218718  549693 logs.go:284] No container was found matching "storage-provisioner"
	I1124 13:57:20.218730  549693 logs.go:123] Gathering logs for CRI-O ...
	I1124 13:57:20.218743  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
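	(The 549693 run keeps probing the apiserver healthz endpoint and falls back to gathering component logs while the connection is refused or reset. A rough manual equivalent of that probe, using the endpoint from the log; -k is needed because the host does not trust the cluster CA:)
	    curl -k https://192.168.76.2:8443/healthz
	    # prints "ok" once the apiserver is healthy; "connection refused" while the static pod is still restarting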
	I1124 13:57:19.312627  571407 out.go:252]   - Booting up control plane ...
	I1124 13:57:19.312753  571407 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1124 13:57:19.312868  571407 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1124 13:57:19.313521  571407 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1124 13:57:19.329510  571407 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1124 13:57:19.330569  571407 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1124 13:57:19.330625  571407 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1124 13:57:19.439952  571407 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I1124 13:57:23.942581  571407 kubeadm.go:319] [apiclient] All control plane components are healthy after 4.502775 seconds
	I1124 13:57:23.942780  571407 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1124 13:57:23.953909  571407 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1124 13:57:24.473800  571407 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1124 13:57:24.474103  571407 kubeadm.go:319] [mark-control-plane] Marking the node old-k8s-version-551674 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1124 13:57:24.984809  571407 kubeadm.go:319] [bootstrap-token] Using token: ys6b1a.2xnctodtlxr4cy0e
	I1124 13:57:21.241643  573633 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.34.1: (1.491787935s)
	I1124 13:57:21.241680  573633 ssh_runner.go:235] Completed: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1: (1.491822521s)
	I1124 13:57:21.241776  573633 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl rmi registry.k8s.io/pause:3.10.1
	I1124 13:57:21.241687  573633 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21932-348000/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 from cache
	I1124 13:57:21.241883  573633 crio.go:275] Loading image: /var/lib/minikube/images/kube-proxy_v1.34.1
	I1124 13:57:21.241932  573633 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1
	I1124 13:57:21.274551  573633 cache_images.go:291] Loading image from: /home/jenkins/minikube-integration/21932-348000/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1
	I1124 13:57:21.274653  573633 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1124 13:57:22.542362  573633 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.34.1: (1.300401s)
	I1124 13:57:22.542392  573633 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21932-348000/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 from cache
	I1124 13:57:22.542414  573633 crio.go:275] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1124 13:57:22.542456  573633 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I1124 13:57:22.542524  573633 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: (1.267857434s)
	I1124 13:57:22.542541  573633 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1124 13:57:22.542555  573633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (321024 bytes)
	I1124 13:57:23.087121  573633 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21932-348000/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I1124 13:57:23.087168  573633 crio.go:275] Loading image: /var/lib/minikube/images/etcd_3.6.4-0
	I1124 13:57:23.087217  573633 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0
	I1124 13:57:20.263486  549693 logs.go:123] Gathering logs for container status ...
	I1124 13:57:20.263522  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 13:57:20.296182  549693 logs.go:123] Gathering logs for kubelet ...
	I1124 13:57:20.296210  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 13:57:20.382239  549693 logs.go:123] Gathering logs for dmesg ...
	I1124 13:57:20.382276  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 13:57:20.399521  549693 logs.go:123] Gathering logs for describe nodes ...
	I1124 13:57:20.399550  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 13:57:20.467474  549693 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 13:57:20.467493  549693 logs.go:123] Gathering logs for kube-apiserver [d5c7a42a438924178c1e3946f6437111c8fac5f01d25a6f15d1f63c567efb073] ...
	I1124 13:57:20.467506  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d5c7a42a438924178c1e3946f6437111c8fac5f01d25a6f15d1f63c567efb073"
	I1124 13:57:20.503293  549693 logs.go:123] Gathering logs for kube-scheduler [09474ec71ef80f8f9036b351250c36774a1251b7d7957550d815fc37dbb0f4ff] ...
	I1124 13:57:20.503324  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 09474ec71ef80f8f9036b351250c36774a1251b7d7957550d815fc37dbb0f4ff"
	I1124 13:57:20.551739  549693 logs.go:123] Gathering logs for kube-controller-manager [196609a37eafb369c7245cccb47becbe53c492038396d82778751fd2bc00c121] ...
	I1124 13:57:20.551776  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 196609a37eafb369c7245cccb47becbe53c492038396d82778751fd2bc00c121"
	I1124 13:57:23.080967  549693 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 13:57:23.081424  549693 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 13:57:23.081482  549693 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 13:57:23.081526  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 13:57:23.108986  549693 cri.go:89] found id: "d5c7a42a438924178c1e3946f6437111c8fac5f01d25a6f15d1f63c567efb073"
	I1124 13:57:23.109013  549693 cri.go:89] found id: ""
	I1124 13:57:23.109024  549693 logs.go:282] 1 containers: [d5c7a42a438924178c1e3946f6437111c8fac5f01d25a6f15d1f63c567efb073]
	I1124 13:57:23.109082  549693 ssh_runner.go:195] Run: which crictl
	I1124 13:57:23.113005  549693 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 13:57:23.113071  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 13:57:23.141535  549693 cri.go:89] found id: ""
	I1124 13:57:23.141567  549693 logs.go:282] 0 containers: []
	W1124 13:57:23.141577  549693 logs.go:284] No container was found matching "etcd"
	I1124 13:57:23.141585  549693 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 13:57:23.141645  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 13:57:23.168572  549693 cri.go:89] found id: ""
	I1124 13:57:23.168599  549693 logs.go:282] 0 containers: []
	W1124 13:57:23.168610  549693 logs.go:284] No container was found matching "coredns"
	I1124 13:57:23.168618  549693 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 13:57:23.168680  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 13:57:23.196831  549693 cri.go:89] found id: "09474ec71ef80f8f9036b351250c36774a1251b7d7957550d815fc37dbb0f4ff"
	I1124 13:57:23.196857  549693 cri.go:89] found id: ""
	I1124 13:57:23.196868  549693 logs.go:282] 1 containers: [09474ec71ef80f8f9036b351250c36774a1251b7d7957550d815fc37dbb0f4ff]
	I1124 13:57:23.196938  549693 ssh_runner.go:195] Run: which crictl
	I1124 13:57:23.200811  549693 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 13:57:23.200872  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 13:57:23.229863  549693 cri.go:89] found id: ""
	I1124 13:57:23.229915  549693 logs.go:282] 0 containers: []
	W1124 13:57:23.229926  549693 logs.go:284] No container was found matching "kube-proxy"
	I1124 13:57:23.229937  549693 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 13:57:23.229995  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 13:57:23.259650  549693 cri.go:89] found id: "196609a37eafb369c7245cccb47becbe53c492038396d82778751fd2bc00c121"
	I1124 13:57:23.259669  549693 cri.go:89] found id: ""
	I1124 13:57:23.259679  549693 logs.go:282] 1 containers: [196609a37eafb369c7245cccb47becbe53c492038396d82778751fd2bc00c121]
	I1124 13:57:23.259735  549693 ssh_runner.go:195] Run: which crictl
	I1124 13:57:23.263487  549693 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 13:57:23.263539  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 13:57:23.289669  549693 cri.go:89] found id: ""
	I1124 13:57:23.289693  549693 logs.go:282] 0 containers: []
	W1124 13:57:23.289706  549693 logs.go:284] No container was found matching "kindnet"
	I1124 13:57:23.289713  549693 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1124 13:57:23.289754  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 13:57:23.315479  549693 cri.go:89] found id: ""
	I1124 13:57:23.315503  549693 logs.go:282] 0 containers: []
	W1124 13:57:23.315512  549693 logs.go:284] No container was found matching "storage-provisioner"
	I1124 13:57:23.315524  549693 logs.go:123] Gathering logs for container status ...
	I1124 13:57:23.315541  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 13:57:23.344882  549693 logs.go:123] Gathering logs for kubelet ...
	I1124 13:57:23.344923  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 13:57:23.416039  549693 logs.go:123] Gathering logs for dmesg ...
	I1124 13:57:23.416065  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 13:57:23.432805  549693 logs.go:123] Gathering logs for describe nodes ...
	I1124 13:57:23.432837  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 13:57:23.502478  549693 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 13:57:23.502506  549693 logs.go:123] Gathering logs for kube-apiserver [d5c7a42a438924178c1e3946f6437111c8fac5f01d25a6f15d1f63c567efb073] ...
	I1124 13:57:23.502524  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d5c7a42a438924178c1e3946f6437111c8fac5f01d25a6f15d1f63c567efb073"
	I1124 13:57:23.543728  549693 logs.go:123] Gathering logs for kube-scheduler [09474ec71ef80f8f9036b351250c36774a1251b7d7957550d815fc37dbb0f4ff] ...
	I1124 13:57:23.543766  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 09474ec71ef80f8f9036b351250c36774a1251b7d7957550d815fc37dbb0f4ff"
	I1124 13:57:23.600218  549693 logs.go:123] Gathering logs for kube-controller-manager [196609a37eafb369c7245cccb47becbe53c492038396d82778751fd2bc00c121] ...
	I1124 13:57:23.600255  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 196609a37eafb369c7245cccb47becbe53c492038396d82778751fd2bc00c121"
	I1124 13:57:23.633238  549693 logs.go:123] Gathering logs for CRI-O ...
	I1124 13:57:23.633273  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 13:57:24.986140  571407 out.go:252]   - Configuring RBAC rules ...
	I1124 13:57:24.986305  571407 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1124 13:57:24.990271  571407 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1124 13:57:24.996326  571407 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1124 13:57:25.003126  571407 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1124 13:57:25.005633  571407 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1124 13:57:25.008373  571407 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1124 13:57:25.018954  571407 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1124 13:57:25.212181  571407 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1124 13:57:25.393851  571407 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1124 13:57:25.395251  571407 kubeadm.go:319] 
	I1124 13:57:25.395372  571407 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1124 13:57:25.395395  571407 kubeadm.go:319] 
	I1124 13:57:25.395546  571407 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1124 13:57:25.395564  571407 kubeadm.go:319] 
	I1124 13:57:25.395608  571407 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1124 13:57:25.395707  571407 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1124 13:57:25.395801  571407 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1124 13:57:25.395819  571407 kubeadm.go:319] 
	I1124 13:57:25.395922  571407 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1124 13:57:25.395932  571407 kubeadm.go:319] 
	I1124 13:57:25.396002  571407 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1124 13:57:25.396011  571407 kubeadm.go:319] 
	I1124 13:57:25.396083  571407 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1124 13:57:25.396202  571407 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1124 13:57:25.396309  571407 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1124 13:57:25.396319  571407 kubeadm.go:319] 
	I1124 13:57:25.396441  571407 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1124 13:57:25.396559  571407 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1124 13:57:25.396589  571407 kubeadm.go:319] 
	I1124 13:57:25.396707  571407 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token ys6b1a.2xnctodtlxr4cy0e \
	I1124 13:57:25.396853  571407 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:8508f5e374ce1614712f271f50423a392652f73206d8a868cc7aac45c80e4a0c \
	I1124 13:57:25.396901  571407 kubeadm.go:319] 	--control-plane 
	I1124 13:57:25.396918  571407 kubeadm.go:319] 
	I1124 13:57:25.397034  571407 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1124 13:57:25.397044  571407 kubeadm.go:319] 
	I1124 13:57:25.397153  571407 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token ys6b1a.2xnctodtlxr4cy0e \
	I1124 13:57:25.397277  571407 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:8508f5e374ce1614712f271f50423a392652f73206d8a868cc7aac45c80e4a0c 
	I1124 13:57:25.399456  571407 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1124 13:57:25.399592  571407 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1124 13:57:25.399632  571407 cni.go:84] Creating CNI manager for ""
	I1124 13:57:25.399650  571407 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 13:57:25.401778  571407 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1124 13:57:25.402966  571407 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1124 13:57:25.407483  571407 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.0/kubectl ...
	I1124 13:57:25.407502  571407 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1124 13:57:25.421315  571407 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1124 13:57:26.677665  571407 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.256314192s)
	I1124 13:57:26.677717  571407 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1124 13:57:26.677802  571407 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:57:26.678026  571407 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes old-k8s-version-551674 minikube.k8s.io/updated_at=2025_11_24T13_57_26_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=b5d1c9f4e75f4e638a533695fd62619949cefcab minikube.k8s.io/name=old-k8s-version-551674 minikube.k8s.io/primary=true
	I1124 13:57:26.764260  571407 ops.go:34] apiserver oom_adj: -16
	I1124 13:57:26.764291  571407 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:57:27.264377  571407 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:57:27.764385  571407 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:57:26.800759  573633 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.6.4-0: (3.713511269s)
	I1124 13:57:26.800796  573633 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21932-348000/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 from cache
	I1124 13:57:26.800822  573633 crio.go:275] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1124 13:57:26.800864  573633 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.10.1
	I1124 13:57:26.915336  573633 cache_images.go:323] Transferred and loaded /home/jenkins/minikube-integration/21932-348000/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 from cache
	I1124 13:57:26.915375  573633 cache_images.go:125] Successfully loaded all cached images
	I1124 13:57:26.915380  573633 cache_images.go:94] duration metric: took 12.176501438s to LoadCachedImages
	I1124 13:57:26.915392  573633 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1124 13:57:26.915482  573633 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=no-preload-495729 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:no-preload-495729 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
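	(The kubelet flags above are written to the node as kubelet.service plus a 10-kubeadm.conf drop-in a few lines further down. A sketch of how to inspect and reload the rendered unit on the node:)
	    systemctl cat kubelet          # shows kubelet.service plus the 10-kubeadm.conf drop-in with the ExecStart above
	    sudo systemctl daemon-reload   # pick up the new drop-in
	    sudo systemctl restart kubelet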
	I1124 13:57:26.915548  573633 ssh_runner.go:195] Run: crio config
	I1124 13:57:26.961598  573633 cni.go:84] Creating CNI manager for ""
	I1124 13:57:26.961625  573633 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 13:57:26.961644  573633 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 13:57:26.961673  573633 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-495729 NodeName:no-preload-495729 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 13:57:26.961822  573633 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-495729"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
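	(This generated config is copied to the node further down as /var/tmp/minikube/kubeadm.yaml.new and then handed to kubeadm. Assuming the file has been promoted to kubeadm.yaml, it can be validated without changing node state via a dry run:)
	    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run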
	
	I1124 13:57:26.961912  573633 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1124 13:57:26.970397  573633 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.34.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.34.1': No such file or directory
	
	Initiating transfer...
	I1124 13:57:26.970452  573633 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.34.1
	I1124 13:57:26.978753  573633 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
	I1124 13:57:26.978848  573633 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl
	I1124 13:57:26.978857  573633 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/21932-348000/.minikube/cache/linux/amd64/v1.34.1/kubeadm
	I1124 13:57:26.978903  573633 download.go:108] Downloading: https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/21932-348000/.minikube/cache/linux/amd64/v1.34.1/kubelet
	I1124 13:57:26.982826  573633 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubectl': No such file or directory
	I1124 13:57:26.982850  573633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/cache/linux/amd64/v1.34.1/kubectl --> /var/lib/minikube/binaries/v1.34.1/kubectl (60559544 bytes)
	I1124 13:57:27.766717  573633 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 13:57:27.780138  573633 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet
	I1124 13:57:27.784047  573633 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubelet': No such file or directory
	I1124 13:57:27.784079  573633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/cache/linux/amd64/v1.34.1/kubelet --> /var/lib/minikube/binaries/v1.34.1/kubelet (59195684 bytes)
	I1124 13:57:27.822694  573633 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm
	I1124 13:57:27.830525  573633 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.34.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.34.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.34.1/kubeadm': No such file or directory
	I1124 13:57:27.830561  573633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/cache/linux/amd64/v1.34.1/kubeadm --> /var/lib/minikube/binaries/v1.34.1/kubeadm (74027192 bytes)
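	(The kubectl/kubeadm/kubelet binaries above are fetched from dl.k8s.io with a checksum= hint that minikube verifies against the published .sha256 files. A manual equivalent for one binary, with the URL taken from the log:)
	    curl -LO "https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubelet"
	    echo "$(curl -sL https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubelet.sha256)  kubelet" | sha256sum --check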
	I1124 13:57:28.094598  573633 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 13:57:28.102871  573633 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1124 13:57:28.115287  573633 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 13:57:28.129863  573633 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2216 bytes)
	I1124 13:57:28.142982  573633 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1124 13:57:28.146672  573633 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 13:57:28.155948  573633 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 13:57:28.236448  573633 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 13:57:28.260867  573633 certs.go:69] Setting up /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/no-preload-495729 for IP: 192.168.103.2
	I1124 13:57:28.260918  573633 certs.go:195] generating shared ca certs ...
	I1124 13:57:28.260936  573633 certs.go:227] acquiring lock for ca certs: {Name:mk929c5478505d0d4647158f3ccc02830de7b582 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:57:28.261108  573633 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21932-348000/.minikube/ca.key
	I1124 13:57:28.261162  573633 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21932-348000/.minikube/proxy-client-ca.key
	I1124 13:57:28.261173  573633 certs.go:257] generating profile certs ...
	I1124 13:57:28.261225  573633 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/no-preload-495729/client.key
	I1124 13:57:28.261239  573633 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/no-preload-495729/client.crt with IP's: []
	I1124 13:57:28.400253  573633 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/no-preload-495729/client.crt ...
	I1124 13:57:28.400279  573633 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/no-preload-495729/client.crt: {Name:mk1bccb90b80822e2b694d0e1d16f81c17491caa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:57:28.400448  573633 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/no-preload-495729/client.key ...
	I1124 13:57:28.400461  573633 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/no-preload-495729/client.key: {Name:mke310e1ee824c765c4c6b1434da5b7bb54684f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:57:28.400549  573633 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/no-preload-495729/apiserver.key.e15203c8
	I1124 13:57:28.400564  573633 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/no-preload-495729/apiserver.crt.e15203c8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1124 13:57:28.444920  573633 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/no-preload-495729/apiserver.crt.e15203c8 ...
	I1124 13:57:28.444940  573633 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/no-preload-495729/apiserver.crt.e15203c8: {Name:mkbd69e0ecab03baf64997b662fa9aff127b2c25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:57:28.445058  573633 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/no-preload-495729/apiserver.key.e15203c8 ...
	I1124 13:57:28.445072  573633 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/no-preload-495729/apiserver.key.e15203c8: {Name:mkaddef9aed1936bc049b484899750225c43f048 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:57:28.445145  573633 certs.go:382] copying /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/no-preload-495729/apiserver.crt.e15203c8 -> /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/no-preload-495729/apiserver.crt
	I1124 13:57:28.445227  573633 certs.go:386] copying /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/no-preload-495729/apiserver.key.e15203c8 -> /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/no-preload-495729/apiserver.key
	I1124 13:57:28.445287  573633 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/no-preload-495729/proxy-client.key
	I1124 13:57:28.445301  573633 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/no-preload-495729/proxy-client.crt with IP's: []
	I1124 13:57:28.702286  573633 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/no-preload-495729/proxy-client.crt ...
	I1124 13:57:28.702311  573633 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/no-preload-495729/proxy-client.crt: {Name:mk851290cc76e1a7a35547c1a0c59d85e9313498 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:57:28.702456  573633 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/no-preload-495729/proxy-client.key ...
	I1124 13:57:28.702469  573633 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/no-preload-495729/proxy-client.key: {Name:mk8a9c376fd5d4087cccdd45da4782aa62060990 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:57:28.702668  573633 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-348000/.minikube/certs/351593.pem (1338 bytes)
	W1124 13:57:28.702713  573633 certs.go:480] ignoring /home/jenkins/minikube-integration/21932-348000/.minikube/certs/351593_empty.pem, impossibly tiny 0 bytes
	I1124 13:57:28.702734  573633 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-348000/.minikube/certs/ca-key.pem (1675 bytes)
	I1124 13:57:28.702762  573633 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-348000/.minikube/certs/ca.pem (1078 bytes)
	I1124 13:57:28.702785  573633 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-348000/.minikube/certs/cert.pem (1123 bytes)
	I1124 13:57:28.702807  573633 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-348000/.minikube/certs/key.pem (1675 bytes)
	I1124 13:57:28.702851  573633 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-348000/.minikube/files/etc/ssl/certs/3515932.pem (1708 bytes)
	I1124 13:57:28.703492  573633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 13:57:28.721561  573633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 13:57:28.738217  573633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 13:57:28.755118  573633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1124 13:57:28.772487  573633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/no-preload-495729/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1124 13:57:28.790232  573633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/no-preload-495729/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1124 13:57:28.806994  573633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/no-preload-495729/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 13:57:28.825336  573633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/no-preload-495729/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1124 13:57:28.842370  573633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/certs/351593.pem --> /usr/share/ca-certificates/351593.pem (1338 bytes)
	I1124 13:57:28.861567  573633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/files/etc/ssl/certs/3515932.pem --> /usr/share/ca-certificates/3515932.pem (1708 bytes)
	I1124 13:57:28.878084  573633 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 13:57:28.894539  573633 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 13:57:28.906069  573633 ssh_runner.go:195] Run: openssl version
	I1124 13:57:28.912015  573633 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 13:57:28.920046  573633 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 13:57:28.923491  573633 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 13:14 /usr/share/ca-certificates/minikubeCA.pem
	I1124 13:57:28.923542  573633 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 13:57:28.958031  573633 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 13:57:28.966087  573633 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/351593.pem && ln -fs /usr/share/ca-certificates/351593.pem /etc/ssl/certs/351593.pem"
	I1124 13:57:28.974753  573633 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/351593.pem
	I1124 13:57:28.978927  573633 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 13:19 /usr/share/ca-certificates/351593.pem
	I1124 13:57:28.978990  573633 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/351593.pem
	I1124 13:57:29.016023  573633 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/351593.pem /etc/ssl/certs/51391683.0"
	I1124 13:57:29.024271  573633 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3515932.pem && ln -fs /usr/share/ca-certificates/3515932.pem /etc/ssl/certs/3515932.pem"
	I1124 13:57:29.032274  573633 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3515932.pem
	I1124 13:57:29.036019  573633 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 13:19 /usr/share/ca-certificates/3515932.pem
	I1124 13:57:29.036062  573633 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3515932.pem
	I1124 13:57:29.070777  573633 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3515932.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 13:57:29.078641  573633 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 13:57:29.082085  573633 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1124 13:57:29.082142  573633 kubeadm.go:401] StartCluster: {Name:no-preload-495729 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-495729 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 13:57:29.082213  573633 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 13:57:29.082248  573633 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 13:57:29.107771  573633 cri.go:89] found id: ""
	I1124 13:57:29.107827  573633 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 13:57:29.115381  573633 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1124 13:57:29.122978  573633 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1124 13:57:29.123027  573633 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 13:57:29.130358  573633 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1124 13:57:29.130375  573633 kubeadm.go:158] found existing configuration files:
	
	I1124 13:57:29.130410  573633 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1124 13:57:29.137885  573633 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1124 13:57:29.137951  573633 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1124 13:57:29.145044  573633 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1124 13:57:29.152398  573633 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1124 13:57:29.152440  573633 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 13:57:29.159490  573633 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1124 13:57:29.166626  573633 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1124 13:57:29.166660  573633 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 13:57:29.173444  573633 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1124 13:57:29.180625  573633 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1124 13:57:29.180661  573633 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1124 13:57:29.187509  573633 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1124 13:57:29.220910  573633 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1124 13:57:29.221011  573633 kubeadm.go:319] [preflight] Running pre-flight checks
	I1124 13:57:29.240437  573633 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1124 13:57:29.240505  573633 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1124 13:57:29.240546  573633 kubeadm.go:319] OS: Linux
	I1124 13:57:29.240627  573633 kubeadm.go:319] CGROUPS_CPU: enabled
	I1124 13:57:29.240721  573633 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1124 13:57:29.240783  573633 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1124 13:57:29.240860  573633 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1124 13:57:29.240945  573633 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1124 13:57:29.241022  573633 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1124 13:57:29.241095  573633 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1124 13:57:29.241156  573633 kubeadm.go:319] CGROUPS_IO: enabled
	I1124 13:57:29.300735  573633 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1124 13:57:29.300861  573633 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1124 13:57:29.301006  573633 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1124 13:57:29.316241  573633 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1124 13:57:29.318709  573633 out.go:252]   - Generating certificates and keys ...
	I1124 13:57:29.318816  573633 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1124 13:57:29.318959  573633 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1124 13:57:29.467801  573633 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1124 13:57:29.926743  573633 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1124 13:57:30.102501  573633 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1124 13:57:26.184991  549693 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 13:57:26.185401  549693 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 13:57:26.185464  549693 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 13:57:26.185517  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 13:57:26.212660  549693 cri.go:89] found id: "d5c7a42a438924178c1e3946f6437111c8fac5f01d25a6f15d1f63c567efb073"
	I1124 13:57:26.212681  549693 cri.go:89] found id: ""
	I1124 13:57:26.212690  549693 logs.go:282] 1 containers: [d5c7a42a438924178c1e3946f6437111c8fac5f01d25a6f15d1f63c567efb073]
	I1124 13:57:26.212744  549693 ssh_runner.go:195] Run: which crictl
	I1124 13:57:26.216615  549693 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 13:57:26.216674  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 13:57:26.243277  549693 cri.go:89] found id: ""
	I1124 13:57:26.243305  549693 logs.go:282] 0 containers: []
	W1124 13:57:26.243313  549693 logs.go:284] No container was found matching "etcd"
	I1124 13:57:26.243320  549693 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 13:57:26.243381  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 13:57:26.270037  549693 cri.go:89] found id: ""
	I1124 13:57:26.270061  549693 logs.go:282] 0 containers: []
	W1124 13:57:26.270071  549693 logs.go:284] No container was found matching "coredns"
	I1124 13:57:26.270078  549693 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 13:57:26.270135  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 13:57:26.296960  549693 cri.go:89] found id: "09474ec71ef80f8f9036b351250c36774a1251b7d7957550d815fc37dbb0f4ff"
	I1124 13:57:26.296995  549693 cri.go:89] found id: ""
	I1124 13:57:26.297007  549693 logs.go:282] 1 containers: [09474ec71ef80f8f9036b351250c36774a1251b7d7957550d815fc37dbb0f4ff]
	I1124 13:57:26.297070  549693 ssh_runner.go:195] Run: which crictl
	I1124 13:57:26.301134  549693 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 13:57:26.301198  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 13:57:26.330601  549693 cri.go:89] found id: ""
	I1124 13:57:26.330626  549693 logs.go:282] 0 containers: []
	W1124 13:57:26.330634  549693 logs.go:284] No container was found matching "kube-proxy"
	I1124 13:57:26.330640  549693 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 13:57:26.330701  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 13:57:26.355988  549693 cri.go:89] found id: "196609a37eafb369c7245cccb47becbe53c492038396d82778751fd2bc00c121"
	I1124 13:57:26.356012  549693 cri.go:89] found id: ""
	I1124 13:57:26.356023  549693 logs.go:282] 1 containers: [196609a37eafb369c7245cccb47becbe53c492038396d82778751fd2bc00c121]
	I1124 13:57:26.356072  549693 ssh_runner.go:195] Run: which crictl
	I1124 13:57:26.360027  549693 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 13:57:26.360089  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 13:57:26.386931  549693 cri.go:89] found id: ""
	I1124 13:57:26.386961  549693 logs.go:282] 0 containers: []
	W1124 13:57:26.386970  549693 logs.go:284] No container was found matching "kindnet"
	I1124 13:57:26.386980  549693 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1124 13:57:26.387037  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 13:57:26.413206  549693 cri.go:89] found id: ""
	I1124 13:57:26.413234  549693 logs.go:282] 0 containers: []
	W1124 13:57:26.413246  549693 logs.go:284] No container was found matching "storage-provisioner"
	I1124 13:57:26.413260  549693 logs.go:123] Gathering logs for kube-scheduler [09474ec71ef80f8f9036b351250c36774a1251b7d7957550d815fc37dbb0f4ff] ...
	I1124 13:57:26.413279  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 09474ec71ef80f8f9036b351250c36774a1251b7d7957550d815fc37dbb0f4ff"
	I1124 13:57:26.458907  549693 logs.go:123] Gathering logs for kube-controller-manager [196609a37eafb369c7245cccb47becbe53c492038396d82778751fd2bc00c121] ...
	I1124 13:57:26.458939  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 196609a37eafb369c7245cccb47becbe53c492038396d82778751fd2bc00c121"
	I1124 13:57:26.484744  549693 logs.go:123] Gathering logs for CRI-O ...
	I1124 13:57:26.484773  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 13:57:26.528348  549693 logs.go:123] Gathering logs for container status ...
	I1124 13:57:26.528379  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 13:57:26.558726  549693 logs.go:123] Gathering logs for kubelet ...
	I1124 13:57:26.558753  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 13:57:26.630322  549693 logs.go:123] Gathering logs for dmesg ...
	I1124 13:57:26.630353  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 13:57:26.646849  549693 logs.go:123] Gathering logs for describe nodes ...
	I1124 13:57:26.646872  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 13:57:26.728844  549693 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 13:57:26.728867  549693 logs.go:123] Gathering logs for kube-apiserver [d5c7a42a438924178c1e3946f6437111c8fac5f01d25a6f15d1f63c567efb073] ...
	I1124 13:57:26.728883  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d5c7a42a438924178c1e3946f6437111c8fac5f01d25a6f15d1f63c567efb073"
	I1124 13:57:29.271035  549693 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 13:57:29.271409  549693 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 13:57:29.271465  549693 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 13:57:29.271508  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 13:57:29.304860  549693 cri.go:89] found id: "d5c7a42a438924178c1e3946f6437111c8fac5f01d25a6f15d1f63c567efb073"
	I1124 13:57:29.304882  549693 cri.go:89] found id: ""
	I1124 13:57:29.304903  549693 logs.go:282] 1 containers: [d5c7a42a438924178c1e3946f6437111c8fac5f01d25a6f15d1f63c567efb073]
	I1124 13:57:29.304961  549693 ssh_runner.go:195] Run: which crictl
	I1124 13:57:29.309305  549693 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 13:57:29.309368  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 13:57:29.339516  549693 cri.go:89] found id: ""
	I1124 13:57:29.339540  549693 logs.go:282] 0 containers: []
	W1124 13:57:29.339550  549693 logs.go:284] No container was found matching "etcd"
	I1124 13:57:29.339557  549693 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 13:57:29.339620  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 13:57:29.365924  549693 cri.go:89] found id: ""
	I1124 13:57:29.365950  549693 logs.go:282] 0 containers: []
	W1124 13:57:29.365960  549693 logs.go:284] No container was found matching "coredns"
	I1124 13:57:29.365969  549693 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 13:57:29.366026  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 13:57:29.393209  549693 cri.go:89] found id: "09474ec71ef80f8f9036b351250c36774a1251b7d7957550d815fc37dbb0f4ff"
	I1124 13:57:29.393230  549693 cri.go:89] found id: ""
	I1124 13:57:29.393241  549693 logs.go:282] 1 containers: [09474ec71ef80f8f9036b351250c36774a1251b7d7957550d815fc37dbb0f4ff]
	I1124 13:57:29.393284  549693 ssh_runner.go:195] Run: which crictl
	I1124 13:57:29.397084  549693 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 13:57:29.397141  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 13:57:29.421881  549693 cri.go:89] found id: ""
	I1124 13:57:29.421941  549693 logs.go:282] 0 containers: []
	W1124 13:57:29.421950  549693 logs.go:284] No container was found matching "kube-proxy"
	I1124 13:57:29.421963  549693 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 13:57:29.422016  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 13:57:29.446504  549693 cri.go:89] found id: "196609a37eafb369c7245cccb47becbe53c492038396d82778751fd2bc00c121"
	I1124 13:57:29.446521  549693 cri.go:89] found id: ""
	I1124 13:57:29.446531  549693 logs.go:282] 1 containers: [196609a37eafb369c7245cccb47becbe53c492038396d82778751fd2bc00c121]
	I1124 13:57:29.446579  549693 ssh_runner.go:195] Run: which crictl
	I1124 13:57:29.450356  549693 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 13:57:29.450407  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 13:57:29.476041  549693 cri.go:89] found id: ""
	I1124 13:57:29.476064  549693 logs.go:282] 0 containers: []
	W1124 13:57:29.476074  549693 logs.go:284] No container was found matching "kindnet"
	I1124 13:57:29.476081  549693 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1124 13:57:29.476130  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 13:57:29.502720  549693 cri.go:89] found id: ""
	I1124 13:57:29.502744  549693 logs.go:282] 0 containers: []
	W1124 13:57:29.502754  549693 logs.go:284] No container was found matching "storage-provisioner"
	I1124 13:57:29.502765  549693 logs.go:123] Gathering logs for describe nodes ...
	I1124 13:57:29.502779  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 13:57:29.556575  549693 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 13:57:29.556597  549693 logs.go:123] Gathering logs for kube-apiserver [d5c7a42a438924178c1e3946f6437111c8fac5f01d25a6f15d1f63c567efb073] ...
	I1124 13:57:29.556613  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d5c7a42a438924178c1e3946f6437111c8fac5f01d25a6f15d1f63c567efb073"
	I1124 13:57:29.590498  549693 logs.go:123] Gathering logs for kube-scheduler [09474ec71ef80f8f9036b351250c36774a1251b7d7957550d815fc37dbb0f4ff] ...
	I1124 13:57:29.590527  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 09474ec71ef80f8f9036b351250c36774a1251b7d7957550d815fc37dbb0f4ff"
	I1124 13:57:29.633876  549693 logs.go:123] Gathering logs for kube-controller-manager [196609a37eafb369c7245cccb47becbe53c492038396d82778751fd2bc00c121] ...
	I1124 13:57:29.633912  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 196609a37eafb369c7245cccb47becbe53c492038396d82778751fd2bc00c121"
	I1124 13:57:29.658534  549693 logs.go:123] Gathering logs for CRI-O ...
	I1124 13:57:29.658558  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 13:57:29.699288  549693 logs.go:123] Gathering logs for container status ...
	I1124 13:57:29.699315  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 13:57:29.728940  549693 logs.go:123] Gathering logs for kubelet ...
	I1124 13:57:29.728970  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 13:57:29.810491  549693 logs.go:123] Gathering logs for dmesg ...
	I1124 13:57:29.810520  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 13:57:28.265193  571407 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:57:28.764415  571407 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:57:29.265156  571407 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:57:29.765163  571407 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:57:30.264394  571407 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:57:30.765058  571407 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:57:31.264584  571407 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:57:31.764564  571407 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:57:32.264921  571407 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:57:32.764796  571407 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:57:30.379570  573633 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1124 13:57:31.111350  573633 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1124 13:57:31.111560  573633 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-495729] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1124 13:57:31.266158  573633 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1124 13:57:31.266353  573633 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-495729] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1124 13:57:31.686144  573633 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1124 13:57:31.923523  573633 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1124 13:57:32.185038  573633 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1124 13:57:32.185110  573633 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1124 13:57:32.528464  573633 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1124 13:57:33.073112  573633 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1124 13:57:33.168005  573633 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1124 13:57:33.598124  573633 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1124 13:57:33.690558  573633 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1124 13:57:33.691134  573633 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1124 13:57:33.694570  573633 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1124 13:57:33.696063  573633 out.go:252]   - Booting up control plane ...
	I1124 13:57:33.696177  573633 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1124 13:57:33.696280  573633 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1124 13:57:33.696945  573633 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1124 13:57:33.710532  573633 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1124 13:57:33.710620  573633 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1124 13:57:33.716564  573633 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1124 13:57:33.716899  573633 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1124 13:57:33.716980  573633 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1124 13:57:33.819935  573633 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1124 13:57:33.820045  573633 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1124 13:57:34.821074  573633 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001191289s
	I1124 13:57:34.824226  573633 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1124 13:57:34.824378  573633 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1124 13:57:34.824498  573633 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1124 13:57:34.824577  573633 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1124 13:57:32.331957  549693 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 13:57:32.332463  549693 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 13:57:32.332529  549693 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 13:57:32.332587  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 13:57:32.360218  549693 cri.go:89] found id: "d5c7a42a438924178c1e3946f6437111c8fac5f01d25a6f15d1f63c567efb073"
	I1124 13:57:32.360238  549693 cri.go:89] found id: ""
	I1124 13:57:32.360246  549693 logs.go:282] 1 containers: [d5c7a42a438924178c1e3946f6437111c8fac5f01d25a6f15d1f63c567efb073]
	I1124 13:57:32.360297  549693 ssh_runner.go:195] Run: which crictl
	I1124 13:57:32.364109  549693 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 13:57:32.364160  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 13:57:32.389549  549693 cri.go:89] found id: ""
	I1124 13:57:32.389572  549693 logs.go:282] 0 containers: []
	W1124 13:57:32.389579  549693 logs.go:284] No container was found matching "etcd"
	I1124 13:57:32.389585  549693 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 13:57:32.389635  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 13:57:32.414359  549693 cri.go:89] found id: ""
	I1124 13:57:32.414383  549693 logs.go:282] 0 containers: []
	W1124 13:57:32.414393  549693 logs.go:284] No container was found matching "coredns"
	I1124 13:57:32.414401  549693 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 13:57:32.414462  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 13:57:32.440008  549693 cri.go:89] found id: "09474ec71ef80f8f9036b351250c36774a1251b7d7957550d815fc37dbb0f4ff"
	I1124 13:57:32.440036  549693 cri.go:89] found id: ""
	I1124 13:57:32.440045  549693 logs.go:282] 1 containers: [09474ec71ef80f8f9036b351250c36774a1251b7d7957550d815fc37dbb0f4ff]
	I1124 13:57:32.440097  549693 ssh_runner.go:195] Run: which crictl
	I1124 13:57:32.443872  549693 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 13:57:32.443941  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 13:57:32.469401  549693 cri.go:89] found id: ""
	I1124 13:57:32.469424  549693 logs.go:282] 0 containers: []
	W1124 13:57:32.469434  549693 logs.go:284] No container was found matching "kube-proxy"
	I1124 13:57:32.469442  549693 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 13:57:32.469496  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 13:57:32.496809  549693 cri.go:89] found id: "196609a37eafb369c7245cccb47becbe53c492038396d82778751fd2bc00c121"
	I1124 13:57:32.496832  549693 cri.go:89] found id: ""
	I1124 13:57:32.496842  549693 logs.go:282] 1 containers: [196609a37eafb369c7245cccb47becbe53c492038396d82778751fd2bc00c121]
	I1124 13:57:32.496906  549693 ssh_runner.go:195] Run: which crictl
	I1124 13:57:32.500527  549693 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 13:57:32.500585  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 13:57:32.527346  549693 cri.go:89] found id: ""
	I1124 13:57:32.527369  549693 logs.go:282] 0 containers: []
	W1124 13:57:32.527378  549693 logs.go:284] No container was found matching "kindnet"
	I1124 13:57:32.527385  549693 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1124 13:57:32.527451  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 13:57:32.553285  549693 cri.go:89] found id: ""
	I1124 13:57:32.553309  549693 logs.go:282] 0 containers: []
	W1124 13:57:32.553319  549693 logs.go:284] No container was found matching "storage-provisioner"
	I1124 13:57:32.553331  549693 logs.go:123] Gathering logs for kube-controller-manager [196609a37eafb369c7245cccb47becbe53c492038396d82778751fd2bc00c121] ...
	I1124 13:57:32.553348  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 196609a37eafb369c7245cccb47becbe53c492038396d82778751fd2bc00c121"
	I1124 13:57:32.577411  549693 logs.go:123] Gathering logs for CRI-O ...
	I1124 13:57:32.577432  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 13:57:32.630224  549693 logs.go:123] Gathering logs for container status ...
	I1124 13:57:32.630257  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 13:57:32.660133  549693 logs.go:123] Gathering logs for kubelet ...
	I1124 13:57:32.660162  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 13:57:32.739270  549693 logs.go:123] Gathering logs for dmesg ...
	I1124 13:57:32.739307  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 13:57:32.757046  549693 logs.go:123] Gathering logs for describe nodes ...
	I1124 13:57:32.757070  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 13:57:32.823854  549693 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 13:57:32.823873  549693 logs.go:123] Gathering logs for kube-apiserver [d5c7a42a438924178c1e3946f6437111c8fac5f01d25a6f15d1f63c567efb073] ...
	I1124 13:57:32.823903  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d5c7a42a438924178c1e3946f6437111c8fac5f01d25a6f15d1f63c567efb073"
	I1124 13:57:32.858596  549693 logs.go:123] Gathering logs for kube-scheduler [09474ec71ef80f8f9036b351250c36774a1251b7d7957550d815fc37dbb0f4ff] ...
	I1124 13:57:32.858646  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 09474ec71ef80f8f9036b351250c36774a1251b7d7957550d815fc37dbb0f4ff"
	I1124 13:57:33.265260  571407 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:57:33.764600  571407 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:57:34.265353  571407 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:57:34.764610  571407 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:57:35.265350  571407 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:57:35.765089  571407 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:57:36.264652  571407 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:57:36.765004  571407 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:57:37.264420  571407 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:57:37.764671  571407 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:57:38.264876  571407 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:57:38.356087  571407 kubeadm.go:1114] duration metric: took 11.678328604s to wait for elevateKubeSystemPrivileges
	I1124 13:57:38.356138  571407 kubeadm.go:403] duration metric: took 21.878803001s to StartCluster
	I1124 13:57:38.356163  571407 settings.go:142] acquiring lock: {Name:mk72c17792ecaf5f4aecae499df19a0043a48eea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:57:38.356246  571407 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21932-348000/kubeconfig
	I1124 13:57:38.357783  571407 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-348000/kubeconfig: {Name:mk6bbc2300c711b206dd5e2ef6fd04da250c6338 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:57:38.358051  571407 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1124 13:57:38.358088  571407 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 13:57:38.358147  571407 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 13:57:38.358255  571407 addons.go:70] Setting storage-provisioner=true in profile "old-k8s-version-551674"
	I1124 13:57:38.358277  571407 addons.go:239] Setting addon storage-provisioner=true in "old-k8s-version-551674"
	I1124 13:57:38.358300  571407 config.go:182] Loaded profile config "old-k8s-version-551674": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1124 13:57:38.358316  571407 host.go:66] Checking if "old-k8s-version-551674" exists ...
	I1124 13:57:38.358356  571407 addons.go:70] Setting default-storageclass=true in profile "old-k8s-version-551674"
	I1124 13:57:38.358376  571407 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-551674"
	I1124 13:57:38.358846  571407 cli_runner.go:164] Run: docker container inspect old-k8s-version-551674 --format={{.State.Status}}
	I1124 13:57:38.359002  571407 cli_runner.go:164] Run: docker container inspect old-k8s-version-551674 --format={{.State.Status}}
	I1124 13:57:38.359747  571407 out.go:179] * Verifying Kubernetes components...
	I1124 13:57:38.361522  571407 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 13:57:38.391918  571407 addons.go:239] Setting addon default-storageclass=true in "old-k8s-version-551674"
	I1124 13:57:38.391968  571407 host.go:66] Checking if "old-k8s-version-551674" exists ...
	I1124 13:57:38.392563  571407 cli_runner.go:164] Run: docker container inspect old-k8s-version-551674 --format={{.State.Status}}
	I1124 13:57:38.394763  571407 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 13:57:36.451226  573633 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.626840434s
	I1124 13:57:36.960539  573633 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.135640493s
	I1124 13:57:38.826634  573633 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.002373914s
	I1124 13:57:38.840986  573633 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1124 13:57:38.851423  573633 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1124 13:57:38.860461  573633 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1124 13:57:38.860750  573633 kubeadm.go:319] [mark-control-plane] Marking the node no-preload-495729 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1124 13:57:38.868251  573633 kubeadm.go:319] [bootstrap-token] Using token: 48ihnp.vwtbijadec283ifs
	I1124 13:57:38.396071  571407 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 13:57:38.396092  571407 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 13:57:38.396150  571407 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-551674
	I1124 13:57:38.418200  571407 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 13:57:38.418287  571407 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 13:57:38.418389  571407 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-551674
	I1124 13:57:38.427148  571407 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/old-k8s-version-551674/id_rsa Username:docker}
	I1124 13:57:38.452725  571407 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33428 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/old-k8s-version-551674/id_rsa Username:docker}
	I1124 13:57:38.477975  571407 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1124 13:57:38.557120  571407 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 13:57:38.568275  571407 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 13:57:38.580397  571407 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 13:57:38.734499  571407 start.go:977] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I1124 13:57:38.735724  571407 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-551674" to be "Ready" ...
	I1124 13:57:38.974952  571407 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1124 13:57:38.869902  573633 out.go:252]   - Configuring RBAC rules ...
	I1124 13:57:38.870039  573633 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1124 13:57:38.873723  573633 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1124 13:57:38.878666  573633 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1124 13:57:38.881648  573633 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1124 13:57:38.884769  573633 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1124 13:57:38.889885  573633 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1124 13:57:39.234810  573633 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1124 13:57:39.655817  573633 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1124 13:57:35.405030  549693 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 13:57:35.405441  549693 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 13:57:35.405500  549693 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 13:57:35.405562  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 13:57:35.436526  549693 cri.go:89] found id: "d5c7a42a438924178c1e3946f6437111c8fac5f01d25a6f15d1f63c567efb073"
	I1124 13:57:35.436546  549693 cri.go:89] found id: ""
	I1124 13:57:35.436556  549693 logs.go:282] 1 containers: [d5c7a42a438924178c1e3946f6437111c8fac5f01d25a6f15d1f63c567efb073]
	I1124 13:57:35.436606  549693 ssh_runner.go:195] Run: which crictl
	I1124 13:57:35.440553  549693 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 13:57:35.440627  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 13:57:35.469691  549693 cri.go:89] found id: ""
	I1124 13:57:35.469714  549693 logs.go:282] 0 containers: []
	W1124 13:57:35.469724  549693 logs.go:284] No container was found matching "etcd"
	I1124 13:57:35.469731  549693 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 13:57:35.469778  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 13:57:35.498349  549693 cri.go:89] found id: ""
	I1124 13:57:35.498374  549693 logs.go:282] 0 containers: []
	W1124 13:57:35.498384  549693 logs.go:284] No container was found matching "coredns"
	I1124 13:57:35.498392  549693 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 13:57:35.498445  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 13:57:35.524590  549693 cri.go:89] found id: "09474ec71ef80f8f9036b351250c36774a1251b7d7957550d815fc37dbb0f4ff"
	I1124 13:57:35.524611  549693 cri.go:89] found id: ""
	I1124 13:57:35.524621  549693 logs.go:282] 1 containers: [09474ec71ef80f8f9036b351250c36774a1251b7d7957550d815fc37dbb0f4ff]
	I1124 13:57:35.524672  549693 ssh_runner.go:195] Run: which crictl
	I1124 13:57:35.529028  549693 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 13:57:35.529079  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 13:57:35.559998  549693 cri.go:89] found id: ""
	I1124 13:57:35.560022  549693 logs.go:282] 0 containers: []
	W1124 13:57:35.560032  549693 logs.go:284] No container was found matching "kube-proxy"
	I1124 13:57:35.560039  549693 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 13:57:35.560088  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 13:57:35.589880  549693 cri.go:89] found id: "196609a37eafb369c7245cccb47becbe53c492038396d82778751fd2bc00c121"
	I1124 13:57:35.589924  549693 cri.go:89] found id: ""
	I1124 13:57:35.589935  549693 logs.go:282] 1 containers: [196609a37eafb369c7245cccb47becbe53c492038396d82778751fd2bc00c121]
	I1124 13:57:35.589988  549693 ssh_runner.go:195] Run: which crictl
	I1124 13:57:35.593704  549693 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 13:57:35.593762  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 13:57:35.618198  549693 cri.go:89] found id: ""
	I1124 13:57:35.618221  549693 logs.go:282] 0 containers: []
	W1124 13:57:35.618231  549693 logs.go:284] No container was found matching "kindnet"
	I1124 13:57:35.618238  549693 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1124 13:57:35.618287  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 13:57:35.644239  549693 cri.go:89] found id: ""
	I1124 13:57:35.644261  549693 logs.go:282] 0 containers: []
	W1124 13:57:35.644271  549693 logs.go:284] No container was found matching "storage-provisioner"
	I1124 13:57:35.644283  549693 logs.go:123] Gathering logs for CRI-O ...
	I1124 13:57:35.644296  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 13:57:35.704869  549693 logs.go:123] Gathering logs for container status ...
	I1124 13:57:35.704905  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 13:57:35.734591  549693 logs.go:123] Gathering logs for kubelet ...
	I1124 13:57:35.734619  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 13:57:35.851103  549693 logs.go:123] Gathering logs for dmesg ...
	I1124 13:57:35.851135  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 13:57:35.868937  549693 logs.go:123] Gathering logs for describe nodes ...
	I1124 13:57:35.868962  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 13:57:35.941457  549693 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 13:57:35.941484  549693 logs.go:123] Gathering logs for kube-apiserver [d5c7a42a438924178c1e3946f6437111c8fac5f01d25a6f15d1f63c567efb073] ...
	I1124 13:57:35.941500  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d5c7a42a438924178c1e3946f6437111c8fac5f01d25a6f15d1f63c567efb073"
	I1124 13:57:35.982863  549693 logs.go:123] Gathering logs for kube-scheduler [09474ec71ef80f8f9036b351250c36774a1251b7d7957550d815fc37dbb0f4ff] ...
	I1124 13:57:35.982912  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 09474ec71ef80f8f9036b351250c36774a1251b7d7957550d815fc37dbb0f4ff"
	I1124 13:57:36.041059  549693 logs.go:123] Gathering logs for kube-controller-manager [196609a37eafb369c7245cccb47becbe53c492038396d82778751fd2bc00c121] ...
	I1124 13:57:36.041094  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 196609a37eafb369c7245cccb47becbe53c492038396d82778751fd2bc00c121"
	I1124 13:57:38.575953  549693 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 13:57:38.576325  549693 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 13:57:38.576395  549693 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 13:57:38.576458  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 13:57:38.609454  549693 cri.go:89] found id: "d5c7a42a438924178c1e3946f6437111c8fac5f01d25a6f15d1f63c567efb073"
	I1124 13:57:38.609479  549693 cri.go:89] found id: ""
	I1124 13:57:38.609490  549693 logs.go:282] 1 containers: [d5c7a42a438924178c1e3946f6437111c8fac5f01d25a6f15d1f63c567efb073]
	I1124 13:57:38.609558  549693 ssh_runner.go:195] Run: which crictl
	I1124 13:57:38.614057  549693 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 13:57:38.614122  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 13:57:38.653884  549693 cri.go:89] found id: ""
	I1124 13:57:38.653944  549693 logs.go:282] 0 containers: []
	W1124 13:57:38.653957  549693 logs.go:284] No container was found matching "etcd"
	I1124 13:57:38.653965  549693 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 13:57:38.654177  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 13:57:38.694950  549693 cri.go:89] found id: ""
	I1124 13:57:38.694982  549693 logs.go:282] 0 containers: []
	W1124 13:57:38.694992  549693 logs.go:284] No container was found matching "coredns"
	I1124 13:57:38.695000  549693 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 13:57:38.695073  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 13:57:38.730951  549693 cri.go:89] found id: "09474ec71ef80f8f9036b351250c36774a1251b7d7957550d815fc37dbb0f4ff"
	I1124 13:57:38.731957  549693 cri.go:89] found id: ""
	I1124 13:57:38.731971  549693 logs.go:282] 1 containers: [09474ec71ef80f8f9036b351250c36774a1251b7d7957550d815fc37dbb0f4ff]
	I1124 13:57:38.732043  549693 ssh_runner.go:195] Run: which crictl
	I1124 13:57:38.737061  549693 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 13:57:38.737131  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 13:57:38.772509  549693 cri.go:89] found id: ""
	I1124 13:57:38.772539  549693 logs.go:282] 0 containers: []
	W1124 13:57:38.772552  549693 logs.go:284] No container was found matching "kube-proxy"
	I1124 13:57:38.772560  549693 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 13:57:38.772620  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 13:57:38.807273  549693 cri.go:89] found id: "196609a37eafb369c7245cccb47becbe53c492038396d82778751fd2bc00c121"
	I1124 13:57:38.807296  549693 cri.go:89] found id: ""
	I1124 13:57:38.807306  549693 logs.go:282] 1 containers: [196609a37eafb369c7245cccb47becbe53c492038396d82778751fd2bc00c121]
	I1124 13:57:38.807364  549693 ssh_runner.go:195] Run: which crictl
	I1124 13:57:38.811473  549693 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 13:57:38.811539  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 13:57:38.840830  549693 cri.go:89] found id: ""
	I1124 13:57:38.840858  549693 logs.go:282] 0 containers: []
	W1124 13:57:38.840869  549693 logs.go:284] No container was found matching "kindnet"
	I1124 13:57:38.840878  549693 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1124 13:57:38.840960  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 13:57:38.874818  549693 cri.go:89] found id: ""
	I1124 13:57:38.874843  549693 logs.go:282] 0 containers: []
	W1124 13:57:38.874853  549693 logs.go:284] No container was found matching "storage-provisioner"
	I1124 13:57:38.874866  549693 logs.go:123] Gathering logs for dmesg ...
	I1124 13:57:38.874882  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 13:57:38.898369  549693 logs.go:123] Gathering logs for describe nodes ...
	I1124 13:57:38.898408  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 13:57:38.967437  549693 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 13:57:38.967473  549693 logs.go:123] Gathering logs for kube-apiserver [d5c7a42a438924178c1e3946f6437111c8fac5f01d25a6f15d1f63c567efb073] ...
	I1124 13:57:38.967491  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d5c7a42a438924178c1e3946f6437111c8fac5f01d25a6f15d1f63c567efb073"
	I1124 13:57:39.001624  549693 logs.go:123] Gathering logs for kube-scheduler [09474ec71ef80f8f9036b351250c36774a1251b7d7957550d815fc37dbb0f4ff] ...
	I1124 13:57:39.001656  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 09474ec71ef80f8f9036b351250c36774a1251b7d7957550d815fc37dbb0f4ff"
	I1124 13:57:39.051991  549693 logs.go:123] Gathering logs for kube-controller-manager [196609a37eafb369c7245cccb47becbe53c492038396d82778751fd2bc00c121] ...
	I1124 13:57:39.052020  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 196609a37eafb369c7245cccb47becbe53c492038396d82778751fd2bc00c121"
	I1124 13:57:39.079565  549693 logs.go:123] Gathering logs for CRI-O ...
	I1124 13:57:39.079589  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 13:57:39.133518  549693 logs.go:123] Gathering logs for container status ...
	I1124 13:57:39.133552  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 13:57:39.171263  549693 logs.go:123] Gathering logs for kubelet ...
	I1124 13:57:39.171297  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 13:57:40.232134  573633 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1124 13:57:40.233041  573633 kubeadm.go:319] 
	I1124 13:57:40.233131  573633 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1124 13:57:40.233139  573633 kubeadm.go:319] 
	I1124 13:57:40.233225  573633 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1124 13:57:40.233235  573633 kubeadm.go:319] 
	I1124 13:57:40.233261  573633 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1124 13:57:40.233393  573633 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1124 13:57:40.233486  573633 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1124 13:57:40.233505  573633 kubeadm.go:319] 
	I1124 13:57:40.233585  573633 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1124 13:57:40.233594  573633 kubeadm.go:319] 
	I1124 13:57:40.233688  573633 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1124 13:57:40.233698  573633 kubeadm.go:319] 
	I1124 13:57:40.233785  573633 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1124 13:57:40.233930  573633 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1124 13:57:40.234051  573633 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1124 13:57:40.234059  573633 kubeadm.go:319] 
	I1124 13:57:40.234181  573633 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1124 13:57:40.234294  573633 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1124 13:57:40.234303  573633 kubeadm.go:319] 
	I1124 13:57:40.234416  573633 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token 48ihnp.vwtbijadec283ifs \
	I1124 13:57:40.234583  573633 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:8508f5e374ce1614712f271f50423a392652f73206d8a868cc7aac45c80e4a0c \
	I1124 13:57:40.234632  573633 kubeadm.go:319] 	--control-plane 
	I1124 13:57:40.234642  573633 kubeadm.go:319] 
	I1124 13:57:40.234762  573633 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1124 13:57:40.234772  573633 kubeadm.go:319] 
	I1124 13:57:40.234912  573633 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token 48ihnp.vwtbijadec283ifs \
	I1124 13:57:40.235064  573633 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:8508f5e374ce1614712f271f50423a392652f73206d8a868cc7aac45c80e4a0c 
	I1124 13:57:40.236690  573633 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1124 13:57:40.236874  573633 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1124 13:57:40.236913  573633 cni.go:84] Creating CNI manager for ""
	I1124 13:57:40.236923  573633 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 13:57:40.238422  573633 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1124 13:57:38.976426  571407 addons.go:530] duration metric: took 618.270366ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1124 13:57:39.240477  571407 kapi.go:214] "coredns" deployment in "kube-system" namespace and "old-k8s-version-551674" context rescaled to 1 replicas
	W1124 13:57:40.738964  571407 node_ready.go:57] node "old-k8s-version-551674" has "Ready":"False" status (will retry)
	W1124 13:57:42.739326  571407 node_ready.go:57] node "old-k8s-version-551674" has "Ready":"False" status (will retry)
	I1124 13:57:40.239630  573633 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1124 13:57:40.244652  573633 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1124 13:57:40.244672  573633 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1124 13:57:40.258072  573633 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1124 13:57:40.463145  573633 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1124 13:57:40.463221  573633 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:57:40.463229  573633 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes no-preload-495729 minikube.k8s.io/updated_at=2025_11_24T13_57_40_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=b5d1c9f4e75f4e638a533695fd62619949cefcab minikube.k8s.io/name=no-preload-495729 minikube.k8s.io/primary=true
	I1124 13:57:40.546615  573633 ops.go:34] apiserver oom_adj: -16
	I1124 13:57:40.546689  573633 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:57:41.047068  573633 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:57:41.547628  573633 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:57:42.047090  573633 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:57:42.547841  573633 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:57:43.047723  573633 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:57:43.547225  573633 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:57:44.047166  573633 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:57:44.546815  573633 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:57:44.613170  573633 kubeadm.go:1114] duration metric: took 4.150025246s to wait for elevateKubeSystemPrivileges
	I1124 13:57:44.613210  573633 kubeadm.go:403] duration metric: took 15.531076005s to StartCluster
	I1124 13:57:44.613229  573633 settings.go:142] acquiring lock: {Name:mk72c17792ecaf5f4aecae499df19a0043a48eea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:57:44.613290  573633 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21932-348000/kubeconfig
	I1124 13:57:44.614488  573633 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-348000/kubeconfig: {Name:mk6bbc2300c711b206dd5e2ef6fd04da250c6338 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:57:44.614707  573633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1124 13:57:44.614719  573633 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 13:57:44.614809  573633 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 13:57:44.614937  573633 addons.go:70] Setting storage-provisioner=true in profile "no-preload-495729"
	I1124 13:57:44.614961  573633 addons.go:239] Setting addon storage-provisioner=true in "no-preload-495729"
	I1124 13:57:44.614965  573633 addons.go:70] Setting default-storageclass=true in profile "no-preload-495729"
	I1124 13:57:44.615007  573633 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-495729"
	I1124 13:57:44.615020  573633 host.go:66] Checking if "no-preload-495729" exists ...
	I1124 13:57:44.614969  573633 config.go:182] Loaded profile config "no-preload-495729": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 13:57:44.615385  573633 cli_runner.go:164] Run: docker container inspect no-preload-495729 --format={{.State.Status}}
	I1124 13:57:44.615544  573633 cli_runner.go:164] Run: docker container inspect no-preload-495729 --format={{.State.Status}}
	I1124 13:57:44.616210  573633 out.go:179] * Verifying Kubernetes components...
	I1124 13:57:44.617567  573633 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 13:57:44.637044  573633 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 13:57:44.638487  573633 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 13:57:44.638507  573633 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 13:57:44.638569  573633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-495729
	I1124 13:57:44.638634  573633 addons.go:239] Setting addon default-storageclass=true in "no-preload-495729"
	I1124 13:57:44.638680  573633 host.go:66] Checking if "no-preload-495729" exists ...
	I1124 13:57:44.639172  573633 cli_runner.go:164] Run: docker container inspect no-preload-495729 --format={{.State.Status}}
	I1124 13:57:44.668307  573633 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/no-preload-495729/id_rsa Username:docker}
	I1124 13:57:44.671806  573633 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 13:57:44.671829  573633 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 13:57:44.671908  573633 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-495729
	I1124 13:57:44.694240  573633 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33433 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/no-preload-495729/id_rsa Username:docker}
	I1124 13:57:44.703940  573633 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1124 13:57:44.764418  573633 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 13:57:44.788662  573633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 13:57:44.813707  573633 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 13:57:44.879458  573633 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1124 13:57:44.880723  573633 node_ready.go:35] waiting up to 6m0s for node "no-preload-495729" to be "Ready" ...
	I1124 13:57:45.096804  573633 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1124 13:57:45.098448  573633 addons.go:530] duration metric: took 483.641407ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1124 13:57:41.784356  549693 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 13:57:41.784798  549693 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 13:57:41.784856  549693 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 13:57:41.784947  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 13:57:41.811621  549693 cri.go:89] found id: "d5c7a42a438924178c1e3946f6437111c8fac5f01d25a6f15d1f63c567efb073"
	I1124 13:57:41.811648  549693 cri.go:89] found id: ""
	I1124 13:57:41.811658  549693 logs.go:282] 1 containers: [d5c7a42a438924178c1e3946f6437111c8fac5f01d25a6f15d1f63c567efb073]
	I1124 13:57:41.811704  549693 ssh_runner.go:195] Run: which crictl
	I1124 13:57:41.815627  549693 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 13:57:41.815685  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 13:57:41.842620  549693 cri.go:89] found id: ""
	I1124 13:57:41.842646  549693 logs.go:282] 0 containers: []
	W1124 13:57:41.842657  549693 logs.go:284] No container was found matching "etcd"
	I1124 13:57:41.842671  549693 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 13:57:41.842723  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 13:57:41.867627  549693 cri.go:89] found id: ""
	I1124 13:57:41.867653  549693 logs.go:282] 0 containers: []
	W1124 13:57:41.867663  549693 logs.go:284] No container was found matching "coredns"
	I1124 13:57:41.867670  549693 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 13:57:41.867720  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 13:57:41.892754  549693 cri.go:89] found id: "09474ec71ef80f8f9036b351250c36774a1251b7d7957550d815fc37dbb0f4ff"
	I1124 13:57:41.892774  549693 cri.go:89] found id: ""
	I1124 13:57:41.892784  549693 logs.go:282] 1 containers: [09474ec71ef80f8f9036b351250c36774a1251b7d7957550d815fc37dbb0f4ff]
	I1124 13:57:41.892833  549693 ssh_runner.go:195] Run: which crictl
	I1124 13:57:41.896560  549693 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 13:57:41.896627  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 13:57:41.921407  549693 cri.go:89] found id: ""
	I1124 13:57:41.921427  549693 logs.go:282] 0 containers: []
	W1124 13:57:41.921434  549693 logs.go:284] No container was found matching "kube-proxy"
	I1124 13:57:41.921440  549693 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 13:57:41.921485  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 13:57:41.947566  549693 cri.go:89] found id: "196609a37eafb369c7245cccb47becbe53c492038396d82778751fd2bc00c121"
	I1124 13:57:41.947586  549693 cri.go:89] found id: ""
	I1124 13:57:41.947594  549693 logs.go:282] 1 containers: [196609a37eafb369c7245cccb47becbe53c492038396d82778751fd2bc00c121]
	I1124 13:57:41.947645  549693 ssh_runner.go:195] Run: which crictl
	I1124 13:57:41.951422  549693 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 13:57:41.951474  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 13:57:41.975996  549693 cri.go:89] found id: ""
	I1124 13:57:41.976020  549693 logs.go:282] 0 containers: []
	W1124 13:57:41.976030  549693 logs.go:284] No container was found matching "kindnet"
	I1124 13:57:41.976037  549693 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1124 13:57:41.976079  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 13:57:42.000752  549693 cri.go:89] found id: ""
	I1124 13:57:42.000777  549693 logs.go:282] 0 containers: []
	W1124 13:57:42.000787  549693 logs.go:284] No container was found matching "storage-provisioner"
	I1124 13:57:42.000798  549693 logs.go:123] Gathering logs for dmesg ...
	I1124 13:57:42.000809  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 13:57:42.016535  549693 logs.go:123] Gathering logs for describe nodes ...
	I1124 13:57:42.016557  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 13:57:42.071718  549693 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 13:57:42.071744  549693 logs.go:123] Gathering logs for kube-apiserver [d5c7a42a438924178c1e3946f6437111c8fac5f01d25a6f15d1f63c567efb073] ...
	I1124 13:57:42.071761  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d5c7a42a438924178c1e3946f6437111c8fac5f01d25a6f15d1f63c567efb073"
	I1124 13:57:42.105106  549693 logs.go:123] Gathering logs for kube-scheduler [09474ec71ef80f8f9036b351250c36774a1251b7d7957550d815fc37dbb0f4ff] ...
	I1124 13:57:42.105136  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 09474ec71ef80f8f9036b351250c36774a1251b7d7957550d815fc37dbb0f4ff"
	I1124 13:57:42.151526  549693 logs.go:123] Gathering logs for kube-controller-manager [196609a37eafb369c7245cccb47becbe53c492038396d82778751fd2bc00c121] ...
	I1124 13:57:42.151556  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 196609a37eafb369c7245cccb47becbe53c492038396d82778751fd2bc00c121"
	I1124 13:57:42.177057  549693 logs.go:123] Gathering logs for CRI-O ...
	I1124 13:57:42.177084  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 13:57:42.228928  549693 logs.go:123] Gathering logs for container status ...
	I1124 13:57:42.228955  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 13:57:42.256638  549693 logs.go:123] Gathering logs for kubelet ...
	I1124 13:57:42.256661  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 13:57:44.839181  549693 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 13:57:44.839657  549693 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 13:57:44.839724  549693 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 13:57:44.839783  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 13:57:44.874512  549693 cri.go:89] found id: "d5c7a42a438924178c1e3946f6437111c8fac5f01d25a6f15d1f63c567efb073"
	I1124 13:57:44.874558  549693 cri.go:89] found id: ""
	I1124 13:57:44.874569  549693 logs.go:282] 1 containers: [d5c7a42a438924178c1e3946f6437111c8fac5f01d25a6f15d1f63c567efb073]
	I1124 13:57:44.874628  549693 ssh_runner.go:195] Run: which crictl
	I1124 13:57:44.880817  549693 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 13:57:44.880879  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 13:57:44.919086  549693 cri.go:89] found id: ""
	I1124 13:57:44.919116  549693 logs.go:282] 0 containers: []
	W1124 13:57:44.919127  549693 logs.go:284] No container was found matching "etcd"
	I1124 13:57:44.919136  549693 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 13:57:44.919192  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 13:57:44.953710  549693 cri.go:89] found id: ""
	I1124 13:57:44.953736  549693 logs.go:282] 0 containers: []
	W1124 13:57:44.953747  549693 logs.go:284] No container was found matching "coredns"
	I1124 13:57:44.953756  549693 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 13:57:44.953813  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 13:57:44.985405  549693 cri.go:89] found id: "09474ec71ef80f8f9036b351250c36774a1251b7d7957550d815fc37dbb0f4ff"
	I1124 13:57:44.985432  549693 cri.go:89] found id: ""
	I1124 13:57:44.985443  549693 logs.go:282] 1 containers: [09474ec71ef80f8f9036b351250c36774a1251b7d7957550d815fc37dbb0f4ff]
	I1124 13:57:44.985500  549693 ssh_runner.go:195] Run: which crictl
	I1124 13:57:44.989883  549693 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 13:57:44.989990  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 13:57:45.019512  549693 cri.go:89] found id: ""
	I1124 13:57:45.019554  549693 logs.go:282] 0 containers: []
	W1124 13:57:45.019567  549693 logs.go:284] No container was found matching "kube-proxy"
	I1124 13:57:45.019575  549693 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 13:57:45.019633  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 13:57:45.048774  549693 cri.go:89] found id: "196609a37eafb369c7245cccb47becbe53c492038396d82778751fd2bc00c121"
	I1124 13:57:45.048798  549693 cri.go:89] found id: ""
	I1124 13:57:45.048808  549693 logs.go:282] 1 containers: [196609a37eafb369c7245cccb47becbe53c492038396d82778751fd2bc00c121]
	I1124 13:57:45.048872  549693 ssh_runner.go:195] Run: which crictl
	I1124 13:57:45.053561  549693 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 13:57:45.053629  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 13:57:45.086436  549693 cri.go:89] found id: ""
	I1124 13:57:45.086467  549693 logs.go:282] 0 containers: []
	W1124 13:57:45.086479  549693 logs.go:284] No container was found matching "kindnet"
	I1124 13:57:45.086487  549693 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1124 13:57:45.086560  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 13:57:45.119591  549693 cri.go:89] found id: ""
	I1124 13:57:45.119620  549693 logs.go:282] 0 containers: []
	W1124 13:57:45.119631  549693 logs.go:284] No container was found matching "storage-provisioner"
	I1124 13:57:45.119644  549693 logs.go:123] Gathering logs for kube-scheduler [09474ec71ef80f8f9036b351250c36774a1251b7d7957550d815fc37dbb0f4ff] ...
	I1124 13:57:45.119659  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 09474ec71ef80f8f9036b351250c36774a1251b7d7957550d815fc37dbb0f4ff"
	I1124 13:57:45.171180  549693 logs.go:123] Gathering logs for kube-controller-manager [196609a37eafb369c7245cccb47becbe53c492038396d82778751fd2bc00c121] ...
	I1124 13:57:45.171213  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 196609a37eafb369c7245cccb47becbe53c492038396d82778751fd2bc00c121"
	I1124 13:57:45.199707  549693 logs.go:123] Gathering logs for CRI-O ...
	I1124 13:57:45.199738  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	W1124 13:57:44.739528  571407 node_ready.go:57] node "old-k8s-version-551674" has "Ready":"False" status (will retry)
	W1124 13:57:47.239175  571407 node_ready.go:57] node "old-k8s-version-551674" has "Ready":"False" status (will retry)
	I1124 13:57:45.383105  573633 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-495729" context rescaled to 1 replicas
	W1124 13:57:46.884687  573633 node_ready.go:57] node "no-preload-495729" has "Ready":"False" status (will retry)
	W1124 13:57:49.384056  573633 node_ready.go:57] node "no-preload-495729" has "Ready":"False" status (will retry)
	I1124 13:57:45.250283  549693 logs.go:123] Gathering logs for container status ...
	I1124 13:57:45.250315  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 13:57:45.279720  549693 logs.go:123] Gathering logs for kubelet ...
	I1124 13:57:45.279745  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 13:57:45.360786  549693 logs.go:123] Gathering logs for dmesg ...
	I1124 13:57:45.360817  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 13:57:45.378763  549693 logs.go:123] Gathering logs for describe nodes ...
	I1124 13:57:45.378798  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 13:57:49.738537  571407 node_ready.go:57] node "old-k8s-version-551674" has "Ready":"False" status (will retry)
	I1124 13:57:51.238731  571407 node_ready.go:49] node "old-k8s-version-551674" is "Ready"
	I1124 13:57:51.238764  571407 node_ready.go:38] duration metric: took 12.503011397s for node "old-k8s-version-551674" to be "Ready" ...
	I1124 13:57:51.238781  571407 api_server.go:52] waiting for apiserver process to appear ...
	I1124 13:57:51.238850  571407 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 13:57:51.254673  571407 api_server.go:72] duration metric: took 12.896544303s to wait for apiserver process to appear ...
	I1124 13:57:51.254695  571407 api_server.go:88] waiting for apiserver healthz status ...
	I1124 13:57:51.254714  571407 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1124 13:57:51.260272  571407 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1124 13:57:51.261359  571407 api_server.go:141] control plane version: v1.28.0
	I1124 13:57:51.261382  571407 api_server.go:131] duration metric: took 6.681811ms to wait for apiserver health ...
	I1124 13:57:51.261391  571407 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 13:57:51.265577  571407 system_pods.go:59] 8 kube-system pods found
	I1124 13:57:51.265622  571407 system_pods.go:61] "coredns-5dd5756b68-swk4w" [ea9c4e37-9d2c-4148-b9cf-1961e1e7923f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 13:57:51.265632  571407 system_pods.go:61] "etcd-old-k8s-version-551674" [d41f7874-4dae-4aca-a539-6cc85c0fd65f] Running
	I1124 13:57:51.265656  571407 system_pods.go:61] "kindnet-sz57p" [a75b53b9-cf49-47a0-8184-f678d2dd7fbb] Running
	I1124 13:57:51.265662  571407 system_pods.go:61] "kube-apiserver-old-k8s-version-551674" [bbf37aff-faf4-4a12-8f3e-c16a85518770] Running
	I1124 13:57:51.265672  571407 system_pods.go:61] "kube-controller-manager-old-k8s-version-551674" [5b5b619d-b395-4abd-91d6-0fac3b34542e] Running
	I1124 13:57:51.265677  571407 system_pods.go:61] "kube-proxy-trn2x" [0e1df93d-97cc-48c1-9a95-18cd7d3f1a38] Running
	I1124 13:57:51.265682  571407 system_pods.go:61] "kube-scheduler-old-k8s-version-551674" [63eede78-ef6a-44ab-adeb-18bd57e833db] Running
	I1124 13:57:51.265690  571407 system_pods.go:61] "storage-provisioner" [d77a52ec-4e20-4ade-a015-7e4a4ea5baae] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 13:57:51.265697  571407 system_pods.go:74] duration metric: took 4.300315ms to wait for pod list to return data ...
	I1124 13:57:51.265706  571407 default_sa.go:34] waiting for default service account to be created ...
	I1124 13:57:51.268241  571407 default_sa.go:45] found service account: "default"
	I1124 13:57:51.268262  571407 default_sa.go:55] duration metric: took 2.550382ms for default service account to be created ...
	I1124 13:57:51.268272  571407 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 13:57:51.272099  571407 system_pods.go:86] 8 kube-system pods found
	I1124 13:57:51.272132  571407 system_pods.go:89] "coredns-5dd5756b68-swk4w" [ea9c4e37-9d2c-4148-b9cf-1961e1e7923f] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 13:57:51.272139  571407 system_pods.go:89] "etcd-old-k8s-version-551674" [d41f7874-4dae-4aca-a539-6cc85c0fd65f] Running
	I1124 13:57:51.272148  571407 system_pods.go:89] "kindnet-sz57p" [a75b53b9-cf49-47a0-8184-f678d2dd7fbb] Running
	I1124 13:57:51.272158  571407 system_pods.go:89] "kube-apiserver-old-k8s-version-551674" [bbf37aff-faf4-4a12-8f3e-c16a85518770] Running
	I1124 13:57:51.272165  571407 system_pods.go:89] "kube-controller-manager-old-k8s-version-551674" [5b5b619d-b395-4abd-91d6-0fac3b34542e] Running
	I1124 13:57:51.272171  571407 system_pods.go:89] "kube-proxy-trn2x" [0e1df93d-97cc-48c1-9a95-18cd7d3f1a38] Running
	I1124 13:57:51.272179  571407 system_pods.go:89] "kube-scheduler-old-k8s-version-551674" [63eede78-ef6a-44ab-adeb-18bd57e833db] Running
	I1124 13:57:51.272192  571407 system_pods.go:89] "storage-provisioner" [d77a52ec-4e20-4ade-a015-7e4a4ea5baae] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 13:57:51.272221  571407 retry.go:31] will retry after 250.594322ms: missing components: kube-dns
	I1124 13:57:51.527051  571407 system_pods.go:86] 8 kube-system pods found
	I1124 13:57:51.527080  571407 system_pods.go:89] "coredns-5dd5756b68-swk4w" [ea9c4e37-9d2c-4148-b9cf-1961e1e7923f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 13:57:51.527086  571407 system_pods.go:89] "etcd-old-k8s-version-551674" [d41f7874-4dae-4aca-a539-6cc85c0fd65f] Running
	I1124 13:57:51.527092  571407 system_pods.go:89] "kindnet-sz57p" [a75b53b9-cf49-47a0-8184-f678d2dd7fbb] Running
	I1124 13:57:51.527095  571407 system_pods.go:89] "kube-apiserver-old-k8s-version-551674" [bbf37aff-faf4-4a12-8f3e-c16a85518770] Running
	I1124 13:57:51.527099  571407 system_pods.go:89] "kube-controller-manager-old-k8s-version-551674" [5b5b619d-b395-4abd-91d6-0fac3b34542e] Running
	I1124 13:57:51.527103  571407 system_pods.go:89] "kube-proxy-trn2x" [0e1df93d-97cc-48c1-9a95-18cd7d3f1a38] Running
	I1124 13:57:51.527106  571407 system_pods.go:89] "kube-scheduler-old-k8s-version-551674" [63eede78-ef6a-44ab-adeb-18bd57e833db] Running
	I1124 13:57:51.527109  571407 system_pods.go:89] "storage-provisioner" [d77a52ec-4e20-4ade-a015-7e4a4ea5baae] Running
	I1124 13:57:51.527122  571407 system_pods.go:126] duration metric: took 258.838925ms to wait for k8s-apps to be running ...
	I1124 13:57:51.527133  571407 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 13:57:51.527179  571407 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 13:57:51.540991  571407 system_svc.go:56] duration metric: took 13.84612ms WaitForService to wait for kubelet
	I1124 13:57:51.541021  571407 kubeadm.go:587] duration metric: took 13.182896831s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 13:57:51.541038  571407 node_conditions.go:102] verifying NodePressure condition ...
	I1124 13:57:51.543114  571407 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1124 13:57:51.543141  571407 node_conditions.go:123] node cpu capacity is 8
	I1124 13:57:51.543180  571407 node_conditions.go:105] duration metric: took 2.135733ms to run NodePressure ...
	I1124 13:57:51.543201  571407 start.go:242] waiting for startup goroutines ...
	I1124 13:57:51.543213  571407 start.go:247] waiting for cluster config update ...
	I1124 13:57:51.543229  571407 start.go:256] writing updated cluster config ...
	I1124 13:57:51.543556  571407 ssh_runner.go:195] Run: rm -f paused
	I1124 13:57:51.547507  571407 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 13:57:51.551627  571407 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-swk4w" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:57:52.557529  571407 pod_ready.go:94] pod "coredns-5dd5756b68-swk4w" is "Ready"
	I1124 13:57:52.557561  571407 pod_ready.go:86] duration metric: took 1.005905574s for pod "coredns-5dd5756b68-swk4w" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:57:52.560039  571407 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-551674" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:57:52.563934  571407 pod_ready.go:94] pod "etcd-old-k8s-version-551674" is "Ready"
	I1124 13:57:52.563954  571407 pod_ready.go:86] duration metric: took 3.893315ms for pod "etcd-old-k8s-version-551674" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:57:52.566100  571407 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-551674" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:57:52.569851  571407 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-551674" is "Ready"
	I1124 13:57:52.569872  571407 pod_ready.go:86] duration metric: took 3.754642ms for pod "kube-apiserver-old-k8s-version-551674" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:57:52.572231  571407 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-551674" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:57:52.754579  571407 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-551674" is "Ready"
	I1124 13:57:52.754602  571407 pod_ready.go:86] duration metric: took 182.352439ms for pod "kube-controller-manager-old-k8s-version-551674" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:57:52.955707  571407 pod_ready.go:83] waiting for pod "kube-proxy-trn2x" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:57:53.355024  571407 pod_ready.go:94] pod "kube-proxy-trn2x" is "Ready"
	I1124 13:57:53.355055  571407 pod_ready.go:86] duration metric: took 399.32483ms for pod "kube-proxy-trn2x" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:57:53.555122  571407 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-551674" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:57:53.954422  571407 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-551674" is "Ready"
	I1124 13:57:53.954447  571407 pod_ready.go:86] duration metric: took 399.299345ms for pod "kube-scheduler-old-k8s-version-551674" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:57:53.954459  571407 pod_ready.go:40] duration metric: took 2.406920823s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 13:57:53.998980  571407 start.go:625] kubectl: 1.34.2, cluster: 1.28.0 (minor skew: 6)
	I1124 13:57:54.000631  571407 out.go:203] 
	W1124 13:57:54.001877  571407 out.go:285] ! /usr/local/bin/kubectl is version 1.34.2, which may have incompatibilities with Kubernetes 1.28.0.
	I1124 13:57:54.003152  571407 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1124 13:57:54.004712  571407 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-551674" cluster and "default" namespace by default
	W1124 13:57:51.883563  573633 node_ready.go:57] node "no-preload-495729" has "Ready":"False" status (will retry)
	W1124 13:57:53.884084  573633 node_ready.go:57] node "no-preload-495729" has "Ready":"False" status (will retry)
	W1124 13:57:56.383941  573633 node_ready.go:57] node "no-preload-495729" has "Ready":"False" status (will retry)
	I1124 13:57:58.383396  573633 node_ready.go:49] node "no-preload-495729" is "Ready"
	I1124 13:57:58.383426  573633 node_ready.go:38] duration metric: took 13.502676917s for node "no-preload-495729" to be "Ready" ...
	I1124 13:57:58.383444  573633 api_server.go:52] waiting for apiserver process to appear ...
	I1124 13:57:58.383501  573633 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 13:57:58.395442  573633 api_server.go:72] duration metric: took 13.7806825s to wait for apiserver process to appear ...
	I1124 13:57:58.395467  573633 api_server.go:88] waiting for apiserver healthz status ...
	I1124 13:57:58.395493  573633 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1124 13:57:58.399257  573633 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1124 13:57:58.400109  573633 api_server.go:141] control plane version: v1.34.1
	I1124 13:57:58.400130  573633 api_server.go:131] duration metric: took 4.6575ms to wait for apiserver health ...
	I1124 13:57:58.400138  573633 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 13:57:58.402654  573633 system_pods.go:59] 8 kube-system pods found
	I1124 13:57:58.402688  573633 system_pods.go:61] "coredns-66bc5c9577-b7t2v" [cfd3642f-4fab-4d58-ac21-5c59c0820cb6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 13:57:58.402696  573633 system_pods.go:61] "etcd-no-preload-495729" [3c702450-6910-48ff-ab8d-b8edc83c0455] Running
	I1124 13:57:58.402705  573633 system_pods.go:61] "kindnet-mtrx6" [13e7beb5-16ec-46bf-b0b3-c8b800b38541] Running
	I1124 13:57:58.402715  573633 system_pods.go:61] "kube-apiserver-no-preload-495729" [73e7d6bd-36a7-43fb-87be-1800f46c11bc] Running
	I1124 13:57:58.402721  573633 system_pods.go:61] "kube-controller-manager-no-preload-495729" [786e6d00-16a0-41a3-a6d2-cdd177c24c58] Running
	I1124 13:57:58.402727  573633 system_pods.go:61] "kube-proxy-mxzvp" [2527db35-d2ad-41e5-941e-dec7f072eaad] Running
	I1124 13:57:58.402733  573633 system_pods.go:61] "kube-scheduler-no-preload-495729" [26eb6331-d799-47b4-b6cb-95796575d583] Running
	I1124 13:57:58.402743  573633 system_pods.go:61] "storage-provisioner" [0e767e38-974c-400e-8922-3120c696edf5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 13:57:58.402750  573633 system_pods.go:74] duration metric: took 2.605391ms to wait for pod list to return data ...
	I1124 13:57:58.402760  573633 default_sa.go:34] waiting for default service account to be created ...
	I1124 13:57:58.404727  573633 default_sa.go:45] found service account: "default"
	I1124 13:57:58.404744  573633 default_sa.go:55] duration metric: took 1.977462ms for default service account to be created ...
	I1124 13:57:58.404751  573633 system_pods.go:116] waiting for k8s-apps to be running ...
	I1124 13:57:58.406749  573633 system_pods.go:86] 8 kube-system pods found
	I1124 13:57:58.406778  573633 system_pods.go:89] "coredns-66bc5c9577-b7t2v" [cfd3642f-4fab-4d58-ac21-5c59c0820cb6] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1124 13:57:58.406785  573633 system_pods.go:89] "etcd-no-preload-495729" [3c702450-6910-48ff-ab8d-b8edc83c0455] Running
	I1124 13:57:58.406791  573633 system_pods.go:89] "kindnet-mtrx6" [13e7beb5-16ec-46bf-b0b3-c8b800b38541] Running
	I1124 13:57:58.406795  573633 system_pods.go:89] "kube-apiserver-no-preload-495729" [73e7d6bd-36a7-43fb-87be-1800f46c11bc] Running
	I1124 13:57:58.406799  573633 system_pods.go:89] "kube-controller-manager-no-preload-495729" [786e6d00-16a0-41a3-a6d2-cdd177c24c58] Running
	I1124 13:57:58.406802  573633 system_pods.go:89] "kube-proxy-mxzvp" [2527db35-d2ad-41e5-941e-dec7f072eaad] Running
	I1124 13:57:58.406806  573633 system_pods.go:89] "kube-scheduler-no-preload-495729" [26eb6331-d799-47b4-b6cb-95796575d583] Running
	I1124 13:57:58.406810  573633 system_pods.go:89] "storage-provisioner" [0e767e38-974c-400e-8922-3120c696edf5] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1124 13:57:58.406833  573633 retry.go:31] will retry after 280.890262ms: missing components: kube-dns
	I1124 13:57:58.691069  573633 system_pods.go:86] 8 kube-system pods found
	I1124 13:57:58.691100  573633 system_pods.go:89] "coredns-66bc5c9577-b7t2v" [cfd3642f-4fab-4d58-ac21-5c59c0820cb6] Running
	I1124 13:57:58.691108  573633 system_pods.go:89] "etcd-no-preload-495729" [3c702450-6910-48ff-ab8d-b8edc83c0455] Running
	I1124 13:57:58.691113  573633 system_pods.go:89] "kindnet-mtrx6" [13e7beb5-16ec-46bf-b0b3-c8b800b38541] Running
	I1124 13:57:58.691123  573633 system_pods.go:89] "kube-apiserver-no-preload-495729" [73e7d6bd-36a7-43fb-87be-1800f46c11bc] Running
	I1124 13:57:58.691129  573633 system_pods.go:89] "kube-controller-manager-no-preload-495729" [786e6d00-16a0-41a3-a6d2-cdd177c24c58] Running
	I1124 13:57:58.691133  573633 system_pods.go:89] "kube-proxy-mxzvp" [2527db35-d2ad-41e5-941e-dec7f072eaad] Running
	I1124 13:57:58.691138  573633 system_pods.go:89] "kube-scheduler-no-preload-495729" [26eb6331-d799-47b4-b6cb-95796575d583] Running
	I1124 13:57:58.691142  573633 system_pods.go:89] "storage-provisioner" [0e767e38-974c-400e-8922-3120c696edf5] Running
	I1124 13:57:58.691152  573633 system_pods.go:126] duration metric: took 286.394896ms to wait for k8s-apps to be running ...
	I1124 13:57:58.691161  573633 system_svc.go:44] waiting for kubelet service to be running ....
	I1124 13:57:58.691221  573633 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 13:57:58.704298  573633 system_svc.go:56] duration metric: took 13.128643ms WaitForService to wait for kubelet
	I1124 13:57:58.704323  573633 kubeadm.go:587] duration metric: took 14.08956962s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 13:57:58.704346  573633 node_conditions.go:102] verifying NodePressure condition ...
	I1124 13:57:58.706460  573633 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1124 13:57:58.706483  573633 node_conditions.go:123] node cpu capacity is 8
	I1124 13:57:58.706498  573633 node_conditions.go:105] duration metric: took 2.144337ms to run NodePressure ...
	I1124 13:57:58.706509  573633 start.go:242] waiting for startup goroutines ...
	I1124 13:57:58.706516  573633 start.go:247] waiting for cluster config update ...
	I1124 13:57:58.706526  573633 start.go:256] writing updated cluster config ...
	I1124 13:57:58.706762  573633 ssh_runner.go:195] Run: rm -f paused
	I1124 13:57:58.710405  573633 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 13:57:58.713121  573633 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-b7t2v" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:57:58.716314  573633 pod_ready.go:94] pod "coredns-66bc5c9577-b7t2v" is "Ready"
	I1124 13:57:58.716337  573633 pod_ready.go:86] duration metric: took 3.194308ms for pod "coredns-66bc5c9577-b7t2v" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:57:58.717767  573633 pod_ready.go:83] waiting for pod "etcd-no-preload-495729" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:57:58.720799  573633 pod_ready.go:94] pod "etcd-no-preload-495729" is "Ready"
	I1124 13:57:58.720832  573633 pod_ready.go:86] duration metric: took 3.047272ms for pod "etcd-no-preload-495729" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:57:58.722338  573633 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-495729" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:57:58.726678  573633 pod_ready.go:94] pod "kube-apiserver-no-preload-495729" is "Ready"
	I1124 13:57:58.726698  573633 pod_ready.go:86] duration metric: took 4.340286ms for pod "kube-apiserver-no-preload-495729" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:57:58.728224  573633 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-495729" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:57:59.114568  573633 pod_ready.go:94] pod "kube-controller-manager-no-preload-495729" is "Ready"
	I1124 13:57:59.114594  573633 pod_ready.go:86] duration metric: took 386.354421ms for pod "kube-controller-manager-no-preload-495729" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:57:59.314263  573633 pod_ready.go:83] waiting for pod "kube-proxy-mxzvp" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:57:59.714595  573633 pod_ready.go:94] pod "kube-proxy-mxzvp" is "Ready"
	I1124 13:57:59.714626  573633 pod_ready.go:86] duration metric: took 400.335662ms for pod "kube-proxy-mxzvp" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:57:59.914675  573633 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-495729" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:57:55.434961  549693 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.056140636s)
	W1124 13:57:55.435004  549693 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I1124 13:57:55.435016  549693 logs.go:123] Gathering logs for kube-apiserver [d5c7a42a438924178c1e3946f6437111c8fac5f01d25a6f15d1f63c567efb073] ...
	I1124 13:57:55.435032  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 d5c7a42a438924178c1e3946f6437111c8fac5f01d25a6f15d1f63c567efb073"
	I1124 13:57:57.968563  549693 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 13:58:00.313610  573633 pod_ready.go:94] pod "kube-scheduler-no-preload-495729" is "Ready"
	I1124 13:58:00.313638  573633 pod_ready.go:86] duration metric: took 398.934376ms for pod "kube-scheduler-no-preload-495729" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:58:00.313651  573633 pod_ready.go:40] duration metric: took 1.603207509s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 13:58:00.356983  573633 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1124 13:58:00.358614  573633 out.go:179] * Done! kubectl is now configured to use "no-preload-495729" cluster and "default" namespace by default
	I1124 13:58:02.970719  549693 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1124 13:58:02.970792  549693 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 13:58:02.970959  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 13:58:03.005315  549693 cri.go:89] found id: "dca3aad182021c9dbb6ac9a8642a506c992f98558638d4a723f3b7f6ed69d338"
	I1124 13:58:03.005336  549693 cri.go:89] found id: "d5c7a42a438924178c1e3946f6437111c8fac5f01d25a6f15d1f63c567efb073"
	I1124 13:58:03.005340  549693 cri.go:89] found id: ""
	I1124 13:58:03.005347  549693 logs.go:282] 2 containers: [dca3aad182021c9dbb6ac9a8642a506c992f98558638d4a723f3b7f6ed69d338 d5c7a42a438924178c1e3946f6437111c8fac5f01d25a6f15d1f63c567efb073]
	I1124 13:58:03.005404  549693 ssh_runner.go:195] Run: which crictl
	I1124 13:58:03.010246  549693 ssh_runner.go:195] Run: which crictl
	I1124 13:58:03.014519  549693 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 13:58:03.014591  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 13:58:03.051459  549693 cri.go:89] found id: ""
	I1124 13:58:03.051487  549693 logs.go:282] 0 containers: []
	W1124 13:58:03.051497  549693 logs.go:284] No container was found matching "etcd"
	I1124 13:58:03.051506  549693 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 13:58:03.051571  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 13:58:03.080823  549693 cri.go:89] found id: ""
	I1124 13:58:03.080850  549693 logs.go:282] 0 containers: []
	W1124 13:58:03.080870  549693 logs.go:284] No container was found matching "coredns"
	I1124 13:58:03.080878  549693 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 13:58:03.080944  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 13:58:03.115054  549693 cri.go:89] found id: "09474ec71ef80f8f9036b351250c36774a1251b7d7957550d815fc37dbb0f4ff"
	I1124 13:58:03.115074  549693 cri.go:89] found id: ""
	I1124 13:58:03.115082  549693 logs.go:282] 1 containers: [09474ec71ef80f8f9036b351250c36774a1251b7d7957550d815fc37dbb0f4ff]
	I1124 13:58:03.115124  549693 ssh_runner.go:195] Run: which crictl
	I1124 13:58:03.120168  549693 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 13:58:03.120226  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 13:58:03.154484  549693 cri.go:89] found id: ""
	I1124 13:58:03.154511  549693 logs.go:282] 0 containers: []
	W1124 13:58:03.154522  549693 logs.go:284] No container was found matching "kube-proxy"
	I1124 13:58:03.154530  549693 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 13:58:03.154584  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 13:58:03.183948  549693 cri.go:89] found id: "cfa845965624e690fb5a0616b9068c5cb2f113ce60ef66b8febfc426ec4d7573"
	I1124 13:58:03.183965  549693 cri.go:89] found id: "196609a37eafb369c7245cccb47becbe53c492038396d82778751fd2bc00c121"
	I1124 13:58:03.183969  549693 cri.go:89] found id: ""
	I1124 13:58:03.183977  549693 logs.go:282] 2 containers: [cfa845965624e690fb5a0616b9068c5cb2f113ce60ef66b8febfc426ec4d7573 196609a37eafb369c7245cccb47becbe53c492038396d82778751fd2bc00c121]
	I1124 13:58:03.184033  549693 ssh_runner.go:195] Run: which crictl
	I1124 13:58:03.188045  549693 ssh_runner.go:195] Run: which crictl
	I1124 13:58:03.191425  549693 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 13:58:03.191470  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 13:58:03.216956  549693 cri.go:89] found id: ""
	I1124 13:58:03.216975  549693 logs.go:282] 0 containers: []
	W1124 13:58:03.216983  549693 logs.go:284] No container was found matching "kindnet"
	I1124 13:58:03.216988  549693 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1124 13:58:03.217024  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 13:58:03.242495  549693 cri.go:89] found id: ""
	I1124 13:58:03.242520  549693 logs.go:282] 0 containers: []
	W1124 13:58:03.242529  549693 logs.go:284] No container was found matching "storage-provisioner"
	I1124 13:58:03.242546  549693 logs.go:123] Gathering logs for dmesg ...
	I1124 13:58:03.242558  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 13:58:03.257759  549693 logs.go:123] Gathering logs for describe nodes ...
	I1124 13:58:03.257779  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	
	
	==> CRI-O <==
	Nov 24 13:57:58 no-preload-495729 crio[775]: time="2025-11-24T13:57:58.472653443Z" level=info msg="Started container" PID=2913 containerID=b7b765df28a0f43ce9e6c0212d86bfe598b0eb7aac63a02bf795d175dcdd4974 description=kube-system/storage-provisioner/storage-provisioner id=606bce79-7579-4d5d-944d-ccc2e9ee34b7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=02177e64c6ca782932a922b0d8f88156c84a1ffdb28a6b55de70b4d9a823ad60
	Nov 24 13:57:58 no-preload-495729 crio[775]: time="2025-11-24T13:57:58.47284072Z" level=info msg="Started container" PID=2914 containerID=e0dc1f18d868d98744a872252e04f292679bd2ea7359fd0e4625dfd081e5552a description=kube-system/coredns-66bc5c9577-b7t2v/coredns id=f668af1b-9452-4272-aa81-530889c4dbf5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7ad2b2ce0983c0511e65a1be8920836bfc11120314ab84d124878433b3fb491f
	Nov 24 13:58:00 no-preload-495729 crio[775]: time="2025-11-24T13:58:00.801777805Z" level=info msg="Running pod sandbox: default/busybox/POD" id=6a61f86c-e74a-48b9-b895-b24b0fb3a77d name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 24 13:58:00 no-preload-495729 crio[775]: time="2025-11-24T13:58:00.801841825Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 13:58:00 no-preload-495729 crio[775]: time="2025-11-24T13:58:00.806567604Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:cd5cb7db36df4d275649281ad9afedbbb5d9d538ce69245e5e47842a6d2e4541 UID:bf3a1272-92ff-45db-ba2f-8e360dd19c97 NetNS:/var/run/netns/1eacb093-bad0-49bb-92aa-042f04e07839 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00008b240}] Aliases:map[]}"
	Nov 24 13:58:00 no-preload-495729 crio[775]: time="2025-11-24T13:58:00.806604099Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 24 13:58:00 no-preload-495729 crio[775]: time="2025-11-24T13:58:00.815810521Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:cd5cb7db36df4d275649281ad9afedbbb5d9d538ce69245e5e47842a6d2e4541 UID:bf3a1272-92ff-45db-ba2f-8e360dd19c97 NetNS:/var/run/netns/1eacb093-bad0-49bb-92aa-042f04e07839 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00008b240}] Aliases:map[]}"
	Nov 24 13:58:00 no-preload-495729 crio[775]: time="2025-11-24T13:58:00.816007915Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 24 13:58:00 no-preload-495729 crio[775]: time="2025-11-24T13:58:00.816620866Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 24 13:58:00 no-preload-495729 crio[775]: time="2025-11-24T13:58:00.817494168Z" level=info msg="Ran pod sandbox cd5cb7db36df4d275649281ad9afedbbb5d9d538ce69245e5e47842a6d2e4541 with infra container: default/busybox/POD" id=6a61f86c-e74a-48b9-b895-b24b0fb3a77d name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 24 13:58:00 no-preload-495729 crio[775]: time="2025-11-24T13:58:00.81848802Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=c83515cf-ba69-4053-a1ea-5aff40ffc2fd name=/runtime.v1.ImageService/ImageStatus
	Nov 24 13:58:00 no-preload-495729 crio[775]: time="2025-11-24T13:58:00.8186406Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=c83515cf-ba69-4053-a1ea-5aff40ffc2fd name=/runtime.v1.ImageService/ImageStatus
	Nov 24 13:58:00 no-preload-495729 crio[775]: time="2025-11-24T13:58:00.81867596Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=c83515cf-ba69-4053-a1ea-5aff40ffc2fd name=/runtime.v1.ImageService/ImageStatus
	Nov 24 13:58:00 no-preload-495729 crio[775]: time="2025-11-24T13:58:00.819259675Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=6ecfa6f7-70d9-4249-96dc-ab36dcca5d90 name=/runtime.v1.ImageService/PullImage
	Nov 24 13:58:00 no-preload-495729 crio[775]: time="2025-11-24T13:58:00.820667903Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 24 13:58:01 no-preload-495729 crio[775]: time="2025-11-24T13:58:01.536725381Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=6ecfa6f7-70d9-4249-96dc-ab36dcca5d90 name=/runtime.v1.ImageService/PullImage
	Nov 24 13:58:01 no-preload-495729 crio[775]: time="2025-11-24T13:58:01.537346935Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=fde3eae2-d1bd-45f2-baca-88c28f8bbd64 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 13:58:01 no-preload-495729 crio[775]: time="2025-11-24T13:58:01.538755144Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=517b9b63-5740-413d-b60d-b78bdf070bcb name=/runtime.v1.ImageService/ImageStatus
	Nov 24 13:58:01 no-preload-495729 crio[775]: time="2025-11-24T13:58:01.541908303Z" level=info msg="Creating container: default/busybox/busybox" id=a1964d70-321c-449e-a9c6-6a8f88aa0026 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 13:58:01 no-preload-495729 crio[775]: time="2025-11-24T13:58:01.542039428Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 13:58:01 no-preload-495729 crio[775]: time="2025-11-24T13:58:01.54574758Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 13:58:01 no-preload-495729 crio[775]: time="2025-11-24T13:58:01.546210089Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 13:58:01 no-preload-495729 crio[775]: time="2025-11-24T13:58:01.580530431Z" level=info msg="Created container 2c095d86cd0d2c86f677ab8e9ddea7aa18aa5bca8c1ce1a3d7cf4fd9219c5e01: default/busybox/busybox" id=a1964d70-321c-449e-a9c6-6a8f88aa0026 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 13:58:01 no-preload-495729 crio[775]: time="2025-11-24T13:58:01.581156353Z" level=info msg="Starting container: 2c095d86cd0d2c86f677ab8e9ddea7aa18aa5bca8c1ce1a3d7cf4fd9219c5e01" id=7732e880-e202-41f7-ac24-6dc6e6ef2a10 name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 13:58:01 no-preload-495729 crio[775]: time="2025-11-24T13:58:01.583066389Z" level=info msg="Started container" PID=2987 containerID=2c095d86cd0d2c86f677ab8e9ddea7aa18aa5bca8c1ce1a3d7cf4fd9219c5e01 description=default/busybox/busybox id=7732e880-e202-41f7-ac24-6dc6e6ef2a10 name=/runtime.v1.RuntimeService/StartContainer sandboxID=cd5cb7db36df4d275649281ad9afedbbb5d9d538ce69245e5e47842a6d2e4541
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	2c095d86cd0d2       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   7 seconds ago       Running             busybox                   0                   cd5cb7db36df4       busybox                                     default
	e0dc1f18d868d       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      10 seconds ago      Running             coredns                   0                   7ad2b2ce0983c       coredns-66bc5c9577-b7t2v                    kube-system
	b7b765df28a0f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      10 seconds ago      Running             storage-provisioner       0                   02177e64c6ca7       storage-provisioner                         kube-system
	c3be08dea5f28       docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11    21 seconds ago      Running             kindnet-cni               0                   504c918a05afd       kindnet-mtrx6                               kube-system
	a167d8deb86b5       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      23 seconds ago      Running             kube-proxy                0                   e796e7d4276d6       kube-proxy-mxzvp                            kube-system
	9881cf14082fe       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      33 seconds ago      Running             kube-controller-manager   0                   461b399293edd       kube-controller-manager-no-preload-495729   kube-system
	79740019ba645       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      33 seconds ago      Running             kube-scheduler            0                   f57b0965553f2       kube-scheduler-no-preload-495729            kube-system
	55dae3b43661c       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      33 seconds ago      Running             etcd                      0                   f846799009a4b       etcd-no-preload-495729                      kube-system
	1aa282f473ef2       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      33 seconds ago      Running             kube-apiserver            0                   bb11f456db36f       kube-apiserver-no-preload-495729            kube-system
	
	
	==> coredns [e0dc1f18d868d98744a872252e04f292679bd2ea7359fd0e4625dfd081e5552a] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:49373 - 5957 "HINFO IN 6510379941964851419.1932545369363292499. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.121928566s
	
	
	==> describe nodes <==
	Name:               no-preload-495729
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-495729
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b5d1c9f4e75f4e638a533695fd62619949cefcab
	                    minikube.k8s.io/name=no-preload-495729
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T13_57_40_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 13:57:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-495729
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 13:57:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 13:57:58 +0000   Mon, 24 Nov 2025 13:57:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 13:57:58 +0000   Mon, 24 Nov 2025 13:57:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 13:57:58 +0000   Mon, 24 Nov 2025 13:57:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 13:57:58 +0000   Mon, 24 Nov 2025 13:57:58 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    no-preload-495729
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                b9ead28d-5d73-474f-b9bc-4fe7bfd306f8
	  Boot ID:                    9a34d64a-eb17-4892-9c0b-855837aec864
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 coredns-66bc5c9577-b7t2v                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     23s
	  kube-system                 etcd-no-preload-495729                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         29s
	  kube-system                 kindnet-mtrx6                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      24s
	  kube-system                 kube-apiserver-no-preload-495729             250m (3%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-controller-manager-no-preload-495729    200m (2%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-proxy-mxzvp                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	  kube-system                 kube-scheduler-no-preload-495729             100m (1%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 23s                kube-proxy       
	  Normal  Starting                 34s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  34s (x8 over 34s)  kubelet          Node no-preload-495729 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    34s (x8 over 34s)  kubelet          Node no-preload-495729 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     34s (x8 over 34s)  kubelet          Node no-preload-495729 status is now: NodeHasSufficientPID
	  Normal  Starting                 29s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  29s                kubelet          Node no-preload-495729 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29s                kubelet          Node no-preload-495729 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29s                kubelet          Node no-preload-495729 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           25s                node-controller  Node no-preload-495729 event: Registered Node no-preload-495729 in Controller
	  Normal  NodeReady                10s                kubelet          Node no-preload-495729 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a c8 62 0b 56 43 08 06
	[  +0.000389] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 7e 1c 4f d3 0f 6c 08 06
	[Nov24 13:16] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +1.054353] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +1.023912] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +1.023872] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +1.023897] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +1.023891] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +2.047768] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +4.031637] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +8.191144] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[ +16.382308] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[Nov24 13:17] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	
	
	==> etcd [55dae3b43661cb2307864608eee5120fd24c455c39d2364fdbe8bfaeda80e1ec] <==
	{"level":"warn","ts":"2025-11-24T13:57:36.227975Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37276","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:57:36.234188Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37294","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:57:36.242410Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37320","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:57:36.249538Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37346","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:57:36.256438Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37354","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:57:36.263262Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37378","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:57:36.270231Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37398","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:57:36.277776Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37408","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:57:36.284472Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37424","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:57:36.298527Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37468","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:57:36.315875Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37478","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:57:36.319904Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37492","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:57:36.327187Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37504","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:57:36.334844Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37520","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:57:36.343592Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37540","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:57:36.350784Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37550","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:57:36.356910Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37558","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:57:36.363228Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37574","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:57:36.369270Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37596","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:57:36.376116Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37608","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:57:36.383744Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37616","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:57:36.400982Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37634","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:57:36.406931Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37640","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:57:36.412872Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37666","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:57:36.463490Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37680","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 13:58:09 up  2:40,  0 user,  load average: 3.06, 3.22, 2.01
	Linux no-preload-495729 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [c3be08dea5f28161bff99846bddf4de5b9a58d673bd863ca767a9170fb0d9d90] <==
	I1124 13:57:47.646824       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 13:57:47.647031       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1124 13:57:47.647153       1 main.go:148] setting mtu 1500 for CNI 
	I1124 13:57:47.647168       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 13:57:47.647185       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T13:57:47Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 13:57:47.848619       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 13:57:47.848641       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 13:57:47.848651       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 13:57:47.848764       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1124 13:57:48.248716       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 13:57:48.248758       1 metrics.go:72] Registering metrics
	I1124 13:57:48.248816       1 controller.go:711] "Syncing nftables rules"
	I1124 13:57:57.851101       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1124 13:57:57.851209       1 main.go:301] handling current node
	I1124 13:58:07.852954       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1124 13:58:07.852985       1 main.go:301] handling current node
	
	
	==> kube-apiserver [1aa282f473ef2d9edb5208543ffcf6806ee953a175be37c40583fbb142dec96e] <==
	I1124 13:57:36.938257       1 autoregister_controller.go:144] Starting autoregister controller
	I1124 13:57:36.938263       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1124 13:57:36.938269       1 cache.go:39] Caches are synced for autoregister controller
	I1124 13:57:36.938320       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1124 13:57:36.942182       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1124 13:57:36.960767       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1124 13:57:36.972287       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 13:57:37.892792       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1124 13:57:37.957799       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1124 13:57:37.957824       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1124 13:57:38.468523       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 13:57:38.510259       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 13:57:38.644335       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1124 13:57:38.652769       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1124 13:57:38.654091       1 controller.go:667] quota admission added evaluator for: endpoints
	I1124 13:57:38.660045       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1124 13:57:38.890774       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1124 13:57:39.644708       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1124 13:57:39.654753       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1124 13:57:39.664819       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1124 13:57:43.943202       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 13:57:43.946125       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 13:57:44.644249       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1124 13:57:44.693155       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1124 13:58:07.579828       1 conn.go:339] Error on socket receive: read tcp 192.168.103.2:8443->192.168.103.1:36874: use of closed network connection
	
	
	==> kube-controller-manager [9881cf14082feae62e3c12c2c3ffc39ceefc89b58a1c2c9b091e07a232660972] <==
	I1124 13:57:43.842591       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1124 13:57:43.842597       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1124 13:57:43.842603       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1124 13:57:43.843936       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 13:57:43.848170       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1124 13:57:43.848978       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="no-preload-495729" podCIDRs=["10.244.0.0/24"]
	I1124 13:57:43.887580       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1124 13:57:43.888756       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1124 13:57:43.888781       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1124 13:57:43.888867       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1124 13:57:43.888937       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1124 13:57:43.888952       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1124 13:57:43.888971       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1124 13:57:43.889416       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1124 13:57:43.889943       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1124 13:57:43.889962       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1124 13:57:43.891104       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1124 13:57:43.892228       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1124 13:57:43.892258       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1124 13:57:43.894495       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1124 13:57:43.894533       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 13:57:43.899686       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1124 13:57:43.902956       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 13:57:43.905105       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1124 13:57:58.825842       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [a167d8deb86b55c6205765f1efebe19fde5d672efa021d8548bdf437e3fd05e9] <==
	I1124 13:57:45.708477       1 server_linux.go:53] "Using iptables proxy"
	I1124 13:57:45.774422       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1124 13:57:45.874959       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1124 13:57:45.874986       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1124 13:57:45.875072       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 13:57:45.893107       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 13:57:45.893156       1 server_linux.go:132] "Using iptables Proxier"
	I1124 13:57:45.898544       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 13:57:45.899196       1 server.go:527] "Version info" version="v1.34.1"
	I1124 13:57:45.899256       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 13:57:45.900962       1 config.go:200] "Starting service config controller"
	I1124 13:57:45.900980       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 13:57:45.901003       1 config.go:106] "Starting endpoint slice config controller"
	I1124 13:57:45.901015       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 13:57:45.901108       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 13:57:45.901120       1 config.go:309] "Starting node config controller"
	I1124 13:57:45.901129       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 13:57:45.901128       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 13:57:46.001725       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1124 13:57:46.001804       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1124 13:57:46.001920       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1124 13:57:46.001931       1 shared_informer.go:356] "Caches are synced" controller="node config"
	
	
	==> kube-scheduler [79740019ba6453aa721e05e9c36e040c2cb9b51025d0bcc9128f1d65e311eea5] <==
	E1124 13:57:36.955761       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1124 13:57:36.955592       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1124 13:57:36.957364       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1124 13:57:36.957383       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1124 13:57:36.957503       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1124 13:57:36.957505       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1124 13:57:36.957608       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1124 13:57:36.957631       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1124 13:57:36.957651       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1124 13:57:36.957698       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1124 13:57:36.957724       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1124 13:57:36.957995       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1124 13:57:36.958546       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1124 13:57:36.958653       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1124 13:57:37.785184       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1124 13:57:37.807253       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1124 13:57:37.902517       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1124 13:57:37.927584       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1124 13:57:37.979391       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1124 13:57:38.134979       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1124 13:57:38.161347       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1124 13:57:38.231450       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1124 13:57:38.250519       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1124 13:57:38.258689       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	I1124 13:57:39.954562       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 24 13:57:43 no-preload-495729 kubelet[2310]: I1124 13:57:43.856356    2310 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 24 13:57:44 no-preload-495729 kubelet[2310]: I1124 13:57:44.748394    2310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/13e7beb5-16ec-46bf-b0b3-c8b800b38541-cni-cfg\") pod \"kindnet-mtrx6\" (UID: \"13e7beb5-16ec-46bf-b0b3-c8b800b38541\") " pod="kube-system/kindnet-mtrx6"
	Nov 24 13:57:44 no-preload-495729 kubelet[2310]: I1124 13:57:44.748452    2310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/13e7beb5-16ec-46bf-b0b3-c8b800b38541-lib-modules\") pod \"kindnet-mtrx6\" (UID: \"13e7beb5-16ec-46bf-b0b3-c8b800b38541\") " pod="kube-system/kindnet-mtrx6"
	Nov 24 13:57:44 no-preload-495729 kubelet[2310]: I1124 13:57:44.748481    2310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2527db35-d2ad-41e5-941e-dec7f072eaad-kube-proxy\") pod \"kube-proxy-mxzvp\" (UID: \"2527db35-d2ad-41e5-941e-dec7f072eaad\") " pod="kube-system/kube-proxy-mxzvp"
	Nov 24 13:57:44 no-preload-495729 kubelet[2310]: I1124 13:57:44.748500    2310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2527db35-d2ad-41e5-941e-dec7f072eaad-xtables-lock\") pod \"kube-proxy-mxzvp\" (UID: \"2527db35-d2ad-41e5-941e-dec7f072eaad\") " pod="kube-system/kube-proxy-mxzvp"
	Nov 24 13:57:44 no-preload-495729 kubelet[2310]: I1124 13:57:44.748520    2310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/13e7beb5-16ec-46bf-b0b3-c8b800b38541-xtables-lock\") pod \"kindnet-mtrx6\" (UID: \"13e7beb5-16ec-46bf-b0b3-c8b800b38541\") " pod="kube-system/kindnet-mtrx6"
	Nov 24 13:57:44 no-preload-495729 kubelet[2310]: I1124 13:57:44.748544    2310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-28p79\" (UniqueName: \"kubernetes.io/projected/13e7beb5-16ec-46bf-b0b3-c8b800b38541-kube-api-access-28p79\") pod \"kindnet-mtrx6\" (UID: \"13e7beb5-16ec-46bf-b0b3-c8b800b38541\") " pod="kube-system/kindnet-mtrx6"
	Nov 24 13:57:44 no-preload-495729 kubelet[2310]: I1124 13:57:44.748567    2310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2527db35-d2ad-41e5-941e-dec7f072eaad-lib-modules\") pod \"kube-proxy-mxzvp\" (UID: \"2527db35-d2ad-41e5-941e-dec7f072eaad\") " pod="kube-system/kube-proxy-mxzvp"
	Nov 24 13:57:44 no-preload-495729 kubelet[2310]: I1124 13:57:44.748589    2310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlb2h\" (UniqueName: \"kubernetes.io/projected/2527db35-d2ad-41e5-941e-dec7f072eaad-kube-api-access-nlb2h\") pod \"kube-proxy-mxzvp\" (UID: \"2527db35-d2ad-41e5-941e-dec7f072eaad\") " pod="kube-system/kube-proxy-mxzvp"
	Nov 24 13:57:44 no-preload-495729 kubelet[2310]: E1124 13:57:44.855680    2310 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Nov 24 13:57:44 no-preload-495729 kubelet[2310]: E1124 13:57:44.855715    2310 projected.go:196] Error preparing data for projected volume kube-api-access-28p79 for pod kube-system/kindnet-mtrx6: configmap "kube-root-ca.crt" not found
	Nov 24 13:57:44 no-preload-495729 kubelet[2310]: E1124 13:57:44.855791    2310 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/13e7beb5-16ec-46bf-b0b3-c8b800b38541-kube-api-access-28p79 podName:13e7beb5-16ec-46bf-b0b3-c8b800b38541 nodeName:}" failed. No retries permitted until 2025-11-24 13:57:45.355761018 +0000 UTC m=+5.929231606 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-28p79" (UniqueName: "kubernetes.io/projected/13e7beb5-16ec-46bf-b0b3-c8b800b38541-kube-api-access-28p79") pod "kindnet-mtrx6" (UID: "13e7beb5-16ec-46bf-b0b3-c8b800b38541") : configmap "kube-root-ca.crt" not found
	Nov 24 13:57:44 no-preload-495729 kubelet[2310]: E1124 13:57:44.856746    2310 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
	Nov 24 13:57:44 no-preload-495729 kubelet[2310]: E1124 13:57:44.856774    2310 projected.go:196] Error preparing data for projected volume kube-api-access-nlb2h for pod kube-system/kube-proxy-mxzvp: configmap "kube-root-ca.crt" not found
	Nov 24 13:57:44 no-preload-495729 kubelet[2310]: E1124 13:57:44.856824    2310 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2527db35-d2ad-41e5-941e-dec7f072eaad-kube-api-access-nlb2h podName:2527db35-d2ad-41e5-941e-dec7f072eaad nodeName:}" failed. No retries permitted until 2025-11-24 13:57:45.356806166 +0000 UTC m=+5.930276766 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-nlb2h" (UniqueName: "kubernetes.io/projected/2527db35-d2ad-41e5-941e-dec7f072eaad-kube-api-access-nlb2h") pod "kube-proxy-mxzvp" (UID: "2527db35-d2ad-41e5-941e-dec7f072eaad") : configmap "kube-root-ca.crt" not found
	Nov 24 13:57:46 no-preload-495729 kubelet[2310]: I1124 13:57:46.584929    2310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-mxzvp" podStartSLOduration=2.584906911 podStartE2EDuration="2.584906911s" podCreationTimestamp="2025-11-24 13:57:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 13:57:46.583744958 +0000 UTC m=+7.157215565" watchObservedRunningTime="2025-11-24 13:57:46.584906911 +0000 UTC m=+7.158377519"
	Nov 24 13:57:47 no-preload-495729 kubelet[2310]: I1124 13:57:47.583811    2310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-mtrx6" podStartSLOduration=1.747741382 podStartE2EDuration="3.583793931s" podCreationTimestamp="2025-11-24 13:57:44 +0000 UTC" firstStartedPulling="2025-11-24 13:57:45.622052822 +0000 UTC m=+6.195523412" lastFinishedPulling="2025-11-24 13:57:47.458105374 +0000 UTC m=+8.031575961" observedRunningTime="2025-11-24 13:57:47.581270935 +0000 UTC m=+8.154741548" watchObservedRunningTime="2025-11-24 13:57:47.583793931 +0000 UTC m=+8.157264537"
	Nov 24 13:57:58 no-preload-495729 kubelet[2310]: I1124 13:57:58.102698    2310 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 24 13:57:58 no-preload-495729 kubelet[2310]: I1124 13:57:58.238351    2310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cfd3642f-4fab-4d58-ac21-5c59c0820cb6-config-volume\") pod \"coredns-66bc5c9577-b7t2v\" (UID: \"cfd3642f-4fab-4d58-ac21-5c59c0820cb6\") " pod="kube-system/coredns-66bc5c9577-b7t2v"
	Nov 24 13:57:58 no-preload-495729 kubelet[2310]: I1124 13:57:58.238392    2310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9kk6\" (UniqueName: \"kubernetes.io/projected/cfd3642f-4fab-4d58-ac21-5c59c0820cb6-kube-api-access-k9kk6\") pod \"coredns-66bc5c9577-b7t2v\" (UID: \"cfd3642f-4fab-4d58-ac21-5c59c0820cb6\") " pod="kube-system/coredns-66bc5c9577-b7t2v"
	Nov 24 13:57:58 no-preload-495729 kubelet[2310]: I1124 13:57:58.238418    2310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/0e767e38-974c-400e-8922-3120c696edf5-tmp\") pod \"storage-provisioner\" (UID: \"0e767e38-974c-400e-8922-3120c696edf5\") " pod="kube-system/storage-provisioner"
	Nov 24 13:57:58 no-preload-495729 kubelet[2310]: I1124 13:57:58.238445    2310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8zjt\" (UniqueName: \"kubernetes.io/projected/0e767e38-974c-400e-8922-3120c696edf5-kube-api-access-p8zjt\") pod \"storage-provisioner\" (UID: \"0e767e38-974c-400e-8922-3120c696edf5\") " pod="kube-system/storage-provisioner"
	Nov 24 13:57:58 no-preload-495729 kubelet[2310]: I1124 13:57:58.601306    2310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.601262493 podStartE2EDuration="13.601262493s" podCreationTimestamp="2025-11-24 13:57:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 13:57:58.601044002 +0000 UTC m=+19.174514610" watchObservedRunningTime="2025-11-24 13:57:58.601262493 +0000 UTC m=+19.174733101"
	Nov 24 13:58:00 no-preload-495729 kubelet[2310]: I1124 13:58:00.495421    2310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-b7t2v" podStartSLOduration=15.495398935 podStartE2EDuration="15.495398935s" podCreationTimestamp="2025-11-24 13:57:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 13:57:58.609519449 +0000 UTC m=+19.182990057" watchObservedRunningTime="2025-11-24 13:58:00.495398935 +0000 UTC m=+21.068869545"
	Nov 24 13:58:00 no-preload-495729 kubelet[2310]: I1124 13:58:00.554820    2310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mkhf\" (UniqueName: \"kubernetes.io/projected/bf3a1272-92ff-45db-ba2f-8e360dd19c97-kube-api-access-8mkhf\") pod \"busybox\" (UID: \"bf3a1272-92ff-45db-ba2f-8e360dd19c97\") " pod="default/busybox"
	
	
	==> storage-provisioner [b7b765df28a0f43ce9e6c0212d86bfe598b0eb7aac63a02bf795d175dcdd4974] <==
	I1124 13:57:58.485169       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1124 13:57:58.492712       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1124 13:57:58.492750       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1124 13:57:58.494751       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:57:58.499020       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 13:57:58.499157       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1124 13:57:58.499306       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-495729_5e553fa0-3c53-4ab4-aac2-3f109218d065!
	I1124 13:57:58.499343       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e273147f-7a8c-4890-a226-24f14430939c", APIVersion:"v1", ResourceVersion:"448", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-495729_5e553fa0-3c53-4ab4-aac2-3f109218d065 became leader
	W1124 13:57:58.500635       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:57:58.503866       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 13:57:58.600098       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-495729_5e553fa0-3c53-4ab4-aac2-3f109218d065!
	W1124 13:58:00.506027       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:58:00.508922       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:58:02.511309       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:58:02.515285       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:58:04.518077       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:58:04.522310       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:58:06.524340       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:58:06.527616       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:58:08.530788       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:58:08.534165       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-495729 -n no-preload-495729
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-495729 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.18s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (6.21s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-495729 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p no-preload-495729 --alsologtostderr -v=1: exit status 80 (2.502680963s)

                                                
                                                
-- stdout --
	* Pausing node no-preload-495729 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1124 13:59:05.948652  594804 out.go:360] Setting OutFile to fd 1 ...
	I1124 13:59:05.948918  594804 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:59:05.948930  594804 out.go:374] Setting ErrFile to fd 2...
	I1124 13:59:05.948937  594804 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:59:05.949358  594804 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-348000/.minikube/bin
	I1124 13:59:05.949758  594804 out.go:368] Setting JSON to false
	I1124 13:59:05.949811  594804 mustload.go:66] Loading cluster: no-preload-495729
	I1124 13:59:05.950594  594804 config.go:182] Loaded profile config "no-preload-495729": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 13:59:05.951051  594804 cli_runner.go:164] Run: docker container inspect no-preload-495729 --format={{.State.Status}}
	I1124 13:59:05.968357  594804 host.go:66] Checking if "no-preload-495729" exists ...
	I1124 13:59:05.968585  594804 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 13:59:06.029007  594804 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:77 OomKillDisable:false NGoroutines:86 SystemTime:2025-11-24 13:59:06.018587207 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 13:59:06.029797  594804 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:no-preload-495729 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true)
wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1124 13:59:06.031573  594804 out.go:179] * Pausing node no-preload-495729 ... 
	I1124 13:59:06.032648  594804 host.go:66] Checking if "no-preload-495729" exists ...
	I1124 13:59:06.033002  594804 ssh_runner.go:195] Run: systemctl --version
	I1124 13:59:06.033051  594804 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-495729
	I1124 13:59:06.052712  594804 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33443 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/no-preload-495729/id_rsa Username:docker}
	I1124 13:59:06.157723  594804 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 13:59:06.170201  594804 pause.go:52] kubelet running: true
	I1124 13:59:06.170264  594804 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1124 13:59:06.347224  594804 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1124 13:59:06.347306  594804 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1124 13:59:06.418868  594804 cri.go:89] found id: "f8ae90386bcd1263701dc3942191b95109eee84dbc38e65217bebf360af1be31"
	I1124 13:59:06.418906  594804 cri.go:89] found id: "2a07af39c8cfa9f54acb40356ea5d9c755cef6b9908261e0552df8388a4e4b5b"
	I1124 13:59:06.418913  594804 cri.go:89] found id: "0e39be3eda51444362a2e29b8512e0cd1c604619e642090db9c1ba4832ceac50"
	I1124 13:59:06.418919  594804 cri.go:89] found id: "903c84e46ccdef1d6896c69c09ebe9a3407439b6081697d0a3f3cb40af80da77"
	I1124 13:59:06.418923  594804 cri.go:89] found id: "c5414bcf8f37eefc1509b423927ff2e9afae879fa089646bfe236c7e8838f941"
	I1124 13:59:06.418928  594804 cri.go:89] found id: "62030928e1d177cbf0ad8f12916eb88c433896f670073ef28f5784598ad3be2b"
	I1124 13:59:06.418932  594804 cri.go:89] found id: "fe3faa18594eff20855f0be9dc75861d22f7a99057fb8fd3c24ff01eaf028868"
	I1124 13:59:06.418937  594804 cri.go:89] found id: "157c65960e2b04d4d57edcc130777d480830cee904e929103df8bc888e89eb35"
	I1124 13:59:06.418942  594804 cri.go:89] found id: "9a185bbd7db9231364062faa7b8bf2b09a8815ef19ba81adbcd51a569f653ce2"
	I1124 13:59:06.418954  594804 cri.go:89] found id: "cf1720169940a81c8764b2cac5355d763db5c89ffe09b53993af9b0807b75ad6"
	I1124 13:59:06.418962  594804 cri.go:89] found id: ""
	I1124 13:59:06.419007  594804 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 13:59:06.431862  594804 retry.go:31] will retry after 337.171192ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T13:59:06Z" level=error msg="open /run/runc: no such file or directory"
	I1124 13:59:06.769373  594804 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 13:59:06.783229  594804 pause.go:52] kubelet running: false
	I1124 13:59:06.783286  594804 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1124 13:59:06.925497  594804 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1124 13:59:06.925569  594804 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1124 13:59:06.991579  594804 cri.go:89] found id: "f8ae90386bcd1263701dc3942191b95109eee84dbc38e65217bebf360af1be31"
	I1124 13:59:06.991602  594804 cri.go:89] found id: "2a07af39c8cfa9f54acb40356ea5d9c755cef6b9908261e0552df8388a4e4b5b"
	I1124 13:59:06.991607  594804 cri.go:89] found id: "0e39be3eda51444362a2e29b8512e0cd1c604619e642090db9c1ba4832ceac50"
	I1124 13:59:06.991610  594804 cri.go:89] found id: "903c84e46ccdef1d6896c69c09ebe9a3407439b6081697d0a3f3cb40af80da77"
	I1124 13:59:06.991613  594804 cri.go:89] found id: "c5414bcf8f37eefc1509b423927ff2e9afae879fa089646bfe236c7e8838f941"
	I1124 13:59:06.991617  594804 cri.go:89] found id: "62030928e1d177cbf0ad8f12916eb88c433896f670073ef28f5784598ad3be2b"
	I1124 13:59:06.991620  594804 cri.go:89] found id: "fe3faa18594eff20855f0be9dc75861d22f7a99057fb8fd3c24ff01eaf028868"
	I1124 13:59:06.991622  594804 cri.go:89] found id: "157c65960e2b04d4d57edcc130777d480830cee904e929103df8bc888e89eb35"
	I1124 13:59:06.991625  594804 cri.go:89] found id: "9a185bbd7db9231364062faa7b8bf2b09a8815ef19ba81adbcd51a569f653ce2"
	I1124 13:59:06.991633  594804 cri.go:89] found id: "cf1720169940a81c8764b2cac5355d763db5c89ffe09b53993af9b0807b75ad6"
	I1124 13:59:06.991638  594804 cri.go:89] found id: ""
	I1124 13:59:06.991672  594804 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 13:59:07.003078  594804 retry.go:31] will retry after 275.554644ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T13:59:07Z" level=error msg="open /run/runc: no such file or directory"
	I1124 13:59:07.279567  594804 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 13:59:07.292597  594804 pause.go:52] kubelet running: false
	I1124 13:59:07.292656  594804 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1124 13:59:07.442245  594804 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1124 13:59:07.442322  594804 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1124 13:59:07.508581  594804 cri.go:89] found id: "f8ae90386bcd1263701dc3942191b95109eee84dbc38e65217bebf360af1be31"
	I1124 13:59:07.508604  594804 cri.go:89] found id: "2a07af39c8cfa9f54acb40356ea5d9c755cef6b9908261e0552df8388a4e4b5b"
	I1124 13:59:07.508610  594804 cri.go:89] found id: "0e39be3eda51444362a2e29b8512e0cd1c604619e642090db9c1ba4832ceac50"
	I1124 13:59:07.508615  594804 cri.go:89] found id: "903c84e46ccdef1d6896c69c09ebe9a3407439b6081697d0a3f3cb40af80da77"
	I1124 13:59:07.508618  594804 cri.go:89] found id: "c5414bcf8f37eefc1509b423927ff2e9afae879fa089646bfe236c7e8838f941"
	I1124 13:59:07.508621  594804 cri.go:89] found id: "62030928e1d177cbf0ad8f12916eb88c433896f670073ef28f5784598ad3be2b"
	I1124 13:59:07.508624  594804 cri.go:89] found id: "fe3faa18594eff20855f0be9dc75861d22f7a99057fb8fd3c24ff01eaf028868"
	I1124 13:59:07.508627  594804 cri.go:89] found id: "157c65960e2b04d4d57edcc130777d480830cee904e929103df8bc888e89eb35"
	I1124 13:59:07.508630  594804 cri.go:89] found id: "9a185bbd7db9231364062faa7b8bf2b09a8815ef19ba81adbcd51a569f653ce2"
	I1124 13:59:07.508648  594804 cri.go:89] found id: "cf1720169940a81c8764b2cac5355d763db5c89ffe09b53993af9b0807b75ad6"
	I1124 13:59:07.508654  594804 cri.go:89] found id: ""
	I1124 13:59:07.508692  594804 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 13:59:07.521800  594804 retry.go:31] will retry after 627.163239ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T13:59:07Z" level=error msg="open /run/runc: no such file or directory"
	I1124 13:59:08.149582  594804 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 13:59:08.162511  594804 pause.go:52] kubelet running: false
	I1124 13:59:08.162556  594804 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1124 13:59:08.298695  594804 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1124 13:59:08.298773  594804 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1124 13:59:08.362544  594804 cri.go:89] found id: "f8ae90386bcd1263701dc3942191b95109eee84dbc38e65217bebf360af1be31"
	I1124 13:59:08.362566  594804 cri.go:89] found id: "2a07af39c8cfa9f54acb40356ea5d9c755cef6b9908261e0552df8388a4e4b5b"
	I1124 13:59:08.362571  594804 cri.go:89] found id: "0e39be3eda51444362a2e29b8512e0cd1c604619e642090db9c1ba4832ceac50"
	I1124 13:59:08.362575  594804 cri.go:89] found id: "903c84e46ccdef1d6896c69c09ebe9a3407439b6081697d0a3f3cb40af80da77"
	I1124 13:59:08.362579  594804 cri.go:89] found id: "c5414bcf8f37eefc1509b423927ff2e9afae879fa089646bfe236c7e8838f941"
	I1124 13:59:08.362583  594804 cri.go:89] found id: "62030928e1d177cbf0ad8f12916eb88c433896f670073ef28f5784598ad3be2b"
	I1124 13:59:08.362587  594804 cri.go:89] found id: "fe3faa18594eff20855f0be9dc75861d22f7a99057fb8fd3c24ff01eaf028868"
	I1124 13:59:08.362592  594804 cri.go:89] found id: "157c65960e2b04d4d57edcc130777d480830cee904e929103df8bc888e89eb35"
	I1124 13:59:08.362596  594804 cri.go:89] found id: "9a185bbd7db9231364062faa7b8bf2b09a8815ef19ba81adbcd51a569f653ce2"
	I1124 13:59:08.362605  594804 cri.go:89] found id: "cf1720169940a81c8764b2cac5355d763db5c89ffe09b53993af9b0807b75ad6"
	I1124 13:59:08.362609  594804 cri.go:89] found id: ""
	I1124 13:59:08.362660  594804 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 13:59:08.378006  594804 out.go:203] 
	W1124 13:59:08.379144  594804 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T13:59:08Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T13:59:08Z" level=error msg="open /run/runc: no such file or directory"
	
	W1124 13:59:08.379166  594804 out.go:285] * 
	* 
	W1124 13:59:08.383833  594804 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1124 13:59:08.385443  594804 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p no-preload-495729 --alsologtostderr -v=1 failed: exit status 80
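The pause failure above bottoms out in a single step: every attempt to run "sudo runc list -f json" inside the node returns "open /run/runc: no such file or directory", so the retry loop (337ms, 275ms, 627ms backoffs in the stderr) gives up and minikube exits with GUEST_PAUSE, even though crictl had no trouble listing the kube-system containers. Below is a minimal Go sketch of how the same check can be re-run from the host, assuming the no-preload-495729 profile is still up and that "minikube ssh -- <cmd>" passes the trailing command through; the retry count and fixed delay are illustrative, not minikube's actual backoff.

// reproduce_runc_check.go: re-run the container listing that the failed pause
// performed inside the node, retrying a few times as the log above does.
// Hypothetical helper, not minikube source; assumes minikube is on PATH and
// the no-preload-495729 profile exists.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	profile := "no-preload-495729" // profile name taken from the failing test
	for attempt := 1; attempt <= 4; attempt++ {
		// Same command the pause flow ran over SSH: list runc containers as JSON.
		out, err := exec.Command("minikube", "ssh", "-p", profile, "--",
			"sudo", "runc", "list", "-f", "json").CombinedOutput()
		if err == nil {
			fmt.Printf("runc list succeeded:\n%s\n", out)
			return
		}
		// On this run the node reported: open /run/runc: no such file or directory.
		fmt.Printf("attempt %d failed: %v\n%s\n", attempt, err, out)
		time.Sleep(500 * time.Millisecond) // stand-in for minikube's jittered backoff
	}
	fmt.Println("giving up, matching the GUEST_PAUSE exit above")
}

If the reproduction fails the same way, the useful follow-up is where the CRI-O-managed containers keep their runtime state, since crictl ps succeeds in the same log while the default runc root /run/runc is missing.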
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-495729
helpers_test.go:243: (dbg) docker inspect no-preload-495729:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "93c1bfb2fd2b621a0bec0b1d527f22cea7f75c06c122e690d536a96be4f3a791",
	        "Created": "2025-11-24T13:57:11.035074993Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 587288,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T13:58:28.316605422Z",
	            "FinishedAt": "2025-11-24T13:58:27.452360541Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/93c1bfb2fd2b621a0bec0b1d527f22cea7f75c06c122e690d536a96be4f3a791/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/93c1bfb2fd2b621a0bec0b1d527f22cea7f75c06c122e690d536a96be4f3a791/hostname",
	        "HostsPath": "/var/lib/docker/containers/93c1bfb2fd2b621a0bec0b1d527f22cea7f75c06c122e690d536a96be4f3a791/hosts",
	        "LogPath": "/var/lib/docker/containers/93c1bfb2fd2b621a0bec0b1d527f22cea7f75c06c122e690d536a96be4f3a791/93c1bfb2fd2b621a0bec0b1d527f22cea7f75c06c122e690d536a96be4f3a791-json.log",
	        "Name": "/no-preload-495729",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-495729:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-495729",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "93c1bfb2fd2b621a0bec0b1d527f22cea7f75c06c122e690d536a96be4f3a791",
	                "LowerDir": "/var/lib/docker/overlay2/cf0d29957ea77cc1b3192bc6ff101210d9f3df00649b7e5c1defd8454175840b-init/diff:/var/lib/docker/overlay2/b17d6205cf290186b389ac7c1255d7274fea54ef27df9ff8755bddd2d25eb638/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cf0d29957ea77cc1b3192bc6ff101210d9f3df00649b7e5c1defd8454175840b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cf0d29957ea77cc1b3192bc6ff101210d9f3df00649b7e5c1defd8454175840b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cf0d29957ea77cc1b3192bc6ff101210d9f3df00649b7e5c1defd8454175840b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-495729",
	                "Source": "/var/lib/docker/volumes/no-preload-495729/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-495729",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-495729",
	                "name.minikube.sigs.k8s.io": "no-preload-495729",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "6b37304afa7aa2a53a434e095496b7dac05da45a8d316120ce156dd372326d47",
	            "SandboxKey": "/var/run/docker/netns/6b37304afa7a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33443"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33444"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33447"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33445"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33446"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-495729": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "160c86453933d759975010a4980c48a41dc82dff079fabd600f1a15b1aa5b6c8",
	                    "EndpointID": "ff301e1f9c5f1022bdaa770f8b655d1a873cf11e15cfbd32c37c72c260ae776f",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "9e:1c:b4:3f:6a:f2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-495729",
	                        "93c1bfb2fd2b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
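The inspect output above is also where the SSH endpoint used earlier in this test comes from: at 13:59:06 the pause command rendered the Go template (index (index .NetworkSettings.Ports "22/tcp") 0).HostPort against this container and got port 33443. A short sketch that reads the same mapping, assuming a local Docker daemon and that the no-preload-495729 container still exists; the file name is illustrative.

// host_ssh_port.go: read the host port mapped to the node's SSH port (22/tcp)
// with the same template the pause command used (see the 13:59:06 stderr line).
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl,
		"no-preload-495729").Output()
	if err != nil {
		log.Fatalf("docker inspect failed: %v", err)
	}
	// For the run captured above this prints 33443.
	fmt.Println("ssh host port:", strings.TrimSpace(string(out)))
}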
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-495729 -n no-preload-495729
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-495729 -n no-preload-495729: exit status 2 (319.514687ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
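A non-zero exit from status while the host still reports Running is consistent with the state the failed pause left behind: the container is up, but kubelet was disabled (the stderr above logs "kubelet running: false" after the first "systemctl disable --now kubelet"). A hedged sketch that prints the individual component fields instead of only .Host; .Host and .APIServer appear in this report's own status invocations, while .Kubelet is assumed to be exposed by the same status template.

// profile_status.go: query host/kubelet/apiserver state for the profile after the
// failed pause. A non-zero exit is expected when any component is not running.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "status",
		"-p", "no-preload-495729",
		"--format", "{{.Host}}/{{.Kubelet}}/{{.APIServer}}").CombinedOutput()
	// Keep the output even on error; the helpers above treat exit status 2 as "may be ok".
	fmt.Println("status:", strings.TrimSpace(string(out)))
	if err != nil {
		fmt.Println("non-zero exit (some component not running):", err)
	}
}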
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-495729 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-495729 logs -n 25: (1.049334085s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬────────────────────
─┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼────────────────────
─┤
	│ ssh     │ -p cilium-165759 sudo cri-dockerd --version                                                                                                                                                                                                   │ cilium-165759          │ jenkins │ v1.37.0 │ 24 Nov 25 13:57 UTC │                     │
	│ ssh     │ -p cilium-165759 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ cilium-165759          │ jenkins │ v1.37.0 │ 24 Nov 25 13:57 UTC │                     │
	│ ssh     │ -p cilium-165759 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-165759          │ jenkins │ v1.37.0 │ 24 Nov 25 13:57 UTC │                     │
	│ ssh     │ -p cilium-165759 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-165759          │ jenkins │ v1.37.0 │ 24 Nov 25 13:57 UTC │                     │
	│ ssh     │ -p cilium-165759 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-165759          │ jenkins │ v1.37.0 │ 24 Nov 25 13:57 UTC │                     │
	│ ssh     │ -p cilium-165759 sudo containerd config dump                                                                                                                                                                                                  │ cilium-165759          │ jenkins │ v1.37.0 │ 24 Nov 25 13:57 UTC │                     │
	│ ssh     │ -p cilium-165759 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-165759          │ jenkins │ v1.37.0 │ 24 Nov 25 13:57 UTC │                     │
	│ ssh     │ -p cilium-165759 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-165759          │ jenkins │ v1.37.0 │ 24 Nov 25 13:57 UTC │                     │
	│ ssh     │ -p cilium-165759 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-165759          │ jenkins │ v1.37.0 │ 24 Nov 25 13:57 UTC │                     │
	│ ssh     │ -p cilium-165759 sudo crio config                                                                                                                                                                                                             │ cilium-165759          │ jenkins │ v1.37.0 │ 24 Nov 25 13:57 UTC │                     │
	│ delete  │ -p cilium-165759                                                                                                                                                                                                                              │ cilium-165759          │ jenkins │ v1.37.0 │ 24 Nov 25 13:57 UTC │ 24 Nov 25 13:57 UTC │
	│ start   │ -p no-preload-495729 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-495729      │ jenkins │ v1.37.0 │ 24 Nov 25 13:57 UTC │ 24 Nov 25 13:58 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-551674 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-551674 │ jenkins │ v1.37.0 │ 24 Nov 25 13:58 UTC │                     │
	│ stop    │ -p old-k8s-version-551674 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-551674 │ jenkins │ v1.37.0 │ 24 Nov 25 13:58 UTC │ 24 Nov 25 13:58 UTC │
	│ addons  │ enable metrics-server -p no-preload-495729 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-495729      │ jenkins │ v1.37.0 │ 24 Nov 25 13:58 UTC │                     │
	│ stop    │ -p no-preload-495729 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-495729      │ jenkins │ v1.37.0 │ 24 Nov 25 13:58 UTC │ 24 Nov 25 13:58 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-551674 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-551674 │ jenkins │ v1.37.0 │ 24 Nov 25 13:58 UTC │ 24 Nov 25 13:58 UTC │
	│ start   │ -p old-k8s-version-551674 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-551674 │ jenkins │ v1.37.0 │ 24 Nov 25 13:58 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-495729 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-495729      │ jenkins │ v1.37.0 │ 24 Nov 25 13:58 UTC │ 24 Nov 25 13:58 UTC │
	│ start   │ -p no-preload-495729 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-495729      │ jenkins │ v1.37.0 │ 24 Nov 25 13:58 UTC │ 24 Nov 25 13:58 UTC │
	│ start   │ -p cert-expiration-107341 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-107341 │ jenkins │ v1.37.0 │ 24 Nov 25 13:58 UTC │ 24 Nov 25 13:58 UTC │
	│ delete  │ -p cert-expiration-107341                                                                                                                                                                                                                     │ cert-expiration-107341 │ jenkins │ v1.37.0 │ 24 Nov 25 13:58 UTC │ 24 Nov 25 13:58 UTC │
	│ start   │ -p embed-certs-456660 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-456660     │ jenkins │ v1.37.0 │ 24 Nov 25 13:58 UTC │                     │
	│ image   │ no-preload-495729 image list --format=json                                                                                                                                                                                                    │ no-preload-495729      │ jenkins │ v1.37.0 │ 24 Nov 25 13:59 UTC │ 24 Nov 25 13:59 UTC │
	│ pause   │ -p no-preload-495729 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-495729      │ jenkins │ v1.37.0 │ 24 Nov 25 13:59 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴────────────────────
─┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 13:58:57
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 13:58:57.832535  592938 out.go:360] Setting OutFile to fd 1 ...
	I1124 13:58:57.832784  592938 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:58:57.832794  592938 out.go:374] Setting ErrFile to fd 2...
	I1124 13:58:57.832798  592938 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:58:57.833029  592938 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-348000/.minikube/bin
	I1124 13:58:57.833459  592938 out.go:368] Setting JSON to false
	I1124 13:58:57.834575  592938 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":9685,"bootTime":1763983053,"procs":343,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 13:58:57.834626  592938 start.go:143] virtualization: kvm guest
	I1124 13:58:57.836330  592938 out.go:179] * [embed-certs-456660] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1124 13:58:57.837507  592938 out.go:179]   - MINIKUBE_LOCATION=21932
	I1124 13:58:57.837523  592938 notify.go:221] Checking for updates...
	I1124 13:58:57.839921  592938 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 13:58:57.841058  592938 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21932-348000/kubeconfig
	I1124 13:58:57.842257  592938 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21932-348000/.minikube
	I1124 13:58:57.843359  592938 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1124 13:58:57.844387  592938 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 13:58:57.845792  592938 config.go:182] Loaded profile config "kubernetes-upgrade-061040": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 13:58:57.845882  592938 config.go:182] Loaded profile config "no-preload-495729": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 13:58:57.845974  592938 config.go:182] Loaded profile config "old-k8s-version-551674": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1124 13:58:57.846067  592938 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 13:58:57.871196  592938 docker.go:124] docker version: linux-29.0.3:Docker Engine - Community
	I1124 13:58:57.871338  592938 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 13:58:57.922978  592938 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-24 13:58:57.913490696 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 13:58:57.923135  592938 docker.go:319] overlay module found
	I1124 13:58:57.924579  592938 out.go:179] * Using the docker driver based on user configuration
	I1124 13:58:57.925832  592938 start.go:309] selected driver: docker
	I1124 13:58:57.925846  592938 start.go:927] validating driver "docker" against <nil>
	I1124 13:58:57.925861  592938 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 13:58:57.926462  592938 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 13:58:57.983452  592938 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-24 13:58:57.974403576 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 13:58:57.983659  592938 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1124 13:58:57.983967  592938 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 13:58:57.985479  592938 out.go:179] * Using Docker driver with root privileges
	I1124 13:58:57.986499  592938 cni.go:84] Creating CNI manager for ""
	I1124 13:58:57.986562  592938 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 13:58:57.986573  592938 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1124 13:58:57.986629  592938 start.go:353] cluster config:
	{Name:embed-certs-456660 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-456660 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPI
D:0 GPUs: AutoPauseInterval:1m0s}
	I1124 13:58:57.987825  592938 out.go:179] * Starting "embed-certs-456660" primary control-plane node in "embed-certs-456660" cluster
	I1124 13:58:57.988771  592938 cache.go:134] Beginning downloading kic base image for docker with crio
	I1124 13:58:57.989832  592938 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1124 13:58:57.990822  592938 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 13:58:57.990848  592938 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21932-348000/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1124 13:58:57.990855  592938 cache.go:65] Caching tarball of preloaded images
	I1124 13:58:57.990928  592938 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1124 13:58:57.990965  592938 preload.go:238] Found /home/jenkins/minikube-integration/21932-348000/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1124 13:58:57.990975  592938 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1124 13:58:57.991053  592938 profile.go:143] Saving config to /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/embed-certs-456660/config.json ...
	I1124 13:58:57.991070  592938 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/embed-certs-456660/config.json: {Name:mkca432673f6444f2dbfb76f82936003fbc963b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:58:58.009966  592938 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1124 13:58:58.009988  592938 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1124 13:58:58.010006  592938 cache.go:240] Successfully downloaded all kic artifacts
	I1124 13:58:58.010051  592938 start.go:360] acquireMachinesLock for embed-certs-456660: {Name:mkcb8e616ba1a5200ca9ad17486605327176dd6a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 13:58:58.010136  592938 start.go:364] duration metric: took 70.207µs to acquireMachinesLock for "embed-certs-456660"
	I1124 13:58:58.010158  592938 start.go:93] Provisioning new machine with config: &{Name:embed-certs-456660 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-456660 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 13:58:58.010225  592938 start.go:125] createHost starting for "" (driver="docker")
	W1124 13:58:56.024633  584845 pod_ready.go:104] pod "coredns-5dd5756b68-swk4w" is not "Ready", error: <nil>
	W1124 13:58:58.025441  584845 pod_ready.go:104] pod "coredns-5dd5756b68-swk4w" is not "Ready", error: <nil>
	I1124 13:58:55.282243  549693 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1124 13:58:55.282319  549693 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 13:58:55.282387  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 13:58:55.313555  549693 cri.go:89] found id: "89e3f961c6df2973e09087d6d2b6be846ea3406ee999bc335e9e0f6516db0696"
	I1124 13:58:55.313575  549693 cri.go:89] found id: "dca3aad182021c9dbb6ac9a8642a506c992f98558638d4a723f3b7f6ed69d338"
	I1124 13:58:55.313579  549693 cri.go:89] found id: ""
	I1124 13:58:55.313587  549693 logs.go:282] 2 containers: [89e3f961c6df2973e09087d6d2b6be846ea3406ee999bc335e9e0f6516db0696 dca3aad182021c9dbb6ac9a8642a506c992f98558638d4a723f3b7f6ed69d338]
	I1124 13:58:55.313636  549693 ssh_runner.go:195] Run: which crictl
	I1124 13:58:55.318598  549693 ssh_runner.go:195] Run: which crictl
	I1124 13:58:55.322381  549693 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 13:58:55.322446  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 13:58:55.349491  549693 cri.go:89] found id: ""
	I1124 13:58:55.349514  549693 logs.go:282] 0 containers: []
	W1124 13:58:55.349524  549693 logs.go:284] No container was found matching "etcd"
	I1124 13:58:55.349531  549693 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 13:58:55.349585  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 13:58:55.376967  549693 cri.go:89] found id: ""
	I1124 13:58:55.376993  549693 logs.go:282] 0 containers: []
	W1124 13:58:55.377003  549693 logs.go:284] No container was found matching "coredns"
	I1124 13:58:55.377011  549693 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 13:58:55.377062  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 13:58:55.402361  549693 cri.go:89] found id: "09474ec71ef80f8f9036b351250c36774a1251b7d7957550d815fc37dbb0f4ff"
	I1124 13:58:55.402385  549693 cri.go:89] found id: ""
	I1124 13:58:55.402395  549693 logs.go:282] 1 containers: [09474ec71ef80f8f9036b351250c36774a1251b7d7957550d815fc37dbb0f4ff]
	I1124 13:58:55.402444  549693 ssh_runner.go:195] Run: which crictl
	I1124 13:58:55.406090  549693 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 13:58:55.406150  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 13:58:55.431275  549693 cri.go:89] found id: ""
	I1124 13:58:55.431298  549693 logs.go:282] 0 containers: []
	W1124 13:58:55.431308  549693 logs.go:284] No container was found matching "kube-proxy"
	I1124 13:58:55.431315  549693 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 13:58:55.431362  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 13:58:55.457228  549693 cri.go:89] found id: "df08eccce5717e621bf01ad84d40ae0cec6de290402e562d268ae5529f56350f"
	I1124 13:58:55.457253  549693 cri.go:89] found id: "cfa845965624e690fb5a0616b9068c5cb2f113ce60ef66b8febfc426ec4d7573"
	I1124 13:58:55.457259  549693 cri.go:89] found id: ""
	I1124 13:58:55.457269  549693 logs.go:282] 2 containers: [df08eccce5717e621bf01ad84d40ae0cec6de290402e562d268ae5529f56350f cfa845965624e690fb5a0616b9068c5cb2f113ce60ef66b8febfc426ec4d7573]
	I1124 13:58:55.457321  549693 ssh_runner.go:195] Run: which crictl
	I1124 13:58:55.461016  549693 ssh_runner.go:195] Run: which crictl
	I1124 13:58:55.464625  549693 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 13:58:55.464689  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 13:58:55.489953  549693 cri.go:89] found id: ""
	I1124 13:58:55.489977  549693 logs.go:282] 0 containers: []
	W1124 13:58:55.489986  549693 logs.go:284] No container was found matching "kindnet"
	I1124 13:58:55.489993  549693 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1124 13:58:55.490053  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 13:58:55.514315  549693 cri.go:89] found id: ""
	I1124 13:58:55.514337  549693 logs.go:282] 0 containers: []
	W1124 13:58:55.514347  549693 logs.go:284] No container was found matching "storage-provisioner"
	I1124 13:58:55.514366  549693 logs.go:123] Gathering logs for kube-controller-manager [df08eccce5717e621bf01ad84d40ae0cec6de290402e562d268ae5529f56350f] ...
	I1124 13:58:55.514382  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 df08eccce5717e621bf01ad84d40ae0cec6de290402e562d268ae5529f56350f"
	I1124 13:58:55.540243  549693 logs.go:123] Gathering logs for kube-controller-manager [cfa845965624e690fb5a0616b9068c5cb2f113ce60ef66b8febfc426ec4d7573] ...
	I1124 13:58:55.540268  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cfa845965624e690fb5a0616b9068c5cb2f113ce60ef66b8febfc426ec4d7573"
	I1124 13:58:55.566382  549693 logs.go:123] Gathering logs for kubelet ...
	I1124 13:58:55.566403  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 13:58:55.650749  549693 logs.go:123] Gathering logs for describe nodes ...
	I1124 13:58:55.650774  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1124 13:58:58.012302  592938 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1124 13:58:58.012480  592938 start.go:159] libmachine.API.Create for "embed-certs-456660" (driver="docker")
	I1124 13:58:58.012506  592938 client.go:173] LocalClient.Create starting
	I1124 13:58:58.012592  592938 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21932-348000/.minikube/certs/ca.pem
	I1124 13:58:58.012621  592938 main.go:143] libmachine: Decoding PEM data...
	I1124 13:58:58.012638  592938 main.go:143] libmachine: Parsing certificate...
	I1124 13:58:58.012684  592938 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21932-348000/.minikube/certs/cert.pem
	I1124 13:58:58.012705  592938 main.go:143] libmachine: Decoding PEM data...
	I1124 13:58:58.012718  592938 main.go:143] libmachine: Parsing certificate...
	I1124 13:58:58.013087  592938 cli_runner.go:164] Run: docker network inspect embed-certs-456660 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1124 13:58:58.028580  592938 cli_runner.go:211] docker network inspect embed-certs-456660 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1124 13:58:58.028637  592938 network_create.go:284] running [docker network inspect embed-certs-456660] to gather additional debugging logs...
	I1124 13:58:58.028655  592938 cli_runner.go:164] Run: docker network inspect embed-certs-456660
	W1124 13:58:58.044814  592938 cli_runner.go:211] docker network inspect embed-certs-456660 returned with exit code 1
	I1124 13:58:58.044836  592938 network_create.go:287] error running [docker network inspect embed-certs-456660]: docker network inspect embed-certs-456660: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-456660 not found
	I1124 13:58:58.044855  592938 network_create.go:289] output of [docker network inspect embed-certs-456660]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-456660 not found
	
	** /stderr **
	I1124 13:58:58.044983  592938 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 13:58:58.061306  592938 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-d51e7dfe1049 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:4a:86:1b:17:16:ff} reservation:<nil>}
	I1124 13:58:58.062125  592938 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-e3a6280986d1 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:92:e6:88:24:ba:69} reservation:<nil>}
	I1124 13:58:58.062591  592938 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-e4f79d672777 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:32:e2:7c:23:0e:27} reservation:<nil>}
	I1124 13:58:58.063158  592938 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-283ea71f66a5 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:b6:70:12:a2:88:dd} reservation:<nil>}
	I1124 13:58:58.064069  592938 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ed18b0}
	I1124 13:58:58.064102  592938 network_create.go:124] attempt to create docker network embed-certs-456660 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1124 13:58:58.064156  592938 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-456660 embed-certs-456660
	I1124 13:58:58.110444  592938 network_create.go:108] docker network embed-certs-456660 192.168.85.0/24 created
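For reference, the subnet reported above can be confirmed against the created network; a minimal check, assuming the docker CLI on the build host (not part of this run's output):

	docker network inspect embed-certs-456660 --format '{{(index .IPAM.Config 0).Subnet}}'
	# for this run the expected value is 192.168.85.0/24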
	I1124 13:58:58.110471  592938 kic.go:121] calculated static IP "192.168.85.2" for the "embed-certs-456660" container
	I1124 13:58:58.110535  592938 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1124 13:58:58.127093  592938 cli_runner.go:164] Run: docker volume create embed-certs-456660 --label name.minikube.sigs.k8s.io=embed-certs-456660 --label created_by.minikube.sigs.k8s.io=true
	I1124 13:58:58.143940  592938 oci.go:103] Successfully created a docker volume embed-certs-456660
	I1124 13:58:58.144007  592938 cli_runner.go:164] Run: docker run --rm --name embed-certs-456660-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-456660 --entrypoint /usr/bin/test -v embed-certs-456660:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1124 13:58:58.528917  592938 oci.go:107] Successfully prepared a docker volume embed-certs-456660
	I1124 13:58:58.528984  592938 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 13:58:58.528994  592938 kic.go:194] Starting extracting preloaded images to volume ...
	I1124 13:58:58.529053  592938 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21932-348000/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-456660:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
	W1124 13:59:00.525707  584845 pod_ready.go:104] pod "coredns-5dd5756b68-swk4w" is not "Ready", error: <nil>
	W1124 13:59:02.558153  584845 pod_ready.go:104] pod "coredns-5dd5756b68-swk4w" is not "Ready", error: <nil>
	I1124 13:59:02.862538  592938 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21932-348000/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-456660:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (4.333444773s)
	I1124 13:59:02.862585  592938 kic.go:203] duration metric: took 4.333571339s to extract preloaded images to volume ...
	W1124 13:59:02.862682  592938 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1124 13:59:02.862738  592938 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1124 13:59:02.862787  592938 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1124 13:59:02.917704  592938 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-456660 --name embed-certs-456660 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-456660 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-456660 --network embed-certs-456660 --ip 192.168.85.2 --volume embed-certs-456660:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1124 13:59:03.226966  592938 cli_runner.go:164] Run: docker container inspect embed-certs-456660 --format={{.State.Running}}
	I1124 13:59:03.243537  592938 cli_runner.go:164] Run: docker container inspect embed-certs-456660 --format={{.State.Status}}
	I1124 13:59:03.260968  592938 cli_runner.go:164] Run: docker exec embed-certs-456660 stat /var/lib/dpkg/alternatives/iptables
	I1124 13:59:03.307840  592938 oci.go:144] the created container "embed-certs-456660" has a running status.
	I1124 13:59:03.307871  592938 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21932-348000/.minikube/machines/embed-certs-456660/id_rsa...
	I1124 13:59:03.477717  592938 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21932-348000/.minikube/machines/embed-certs-456660/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1124 13:59:03.503630  592938 cli_runner.go:164] Run: docker container inspect embed-certs-456660 --format={{.State.Status}}
	I1124 13:59:03.524364  592938 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1124 13:59:03.524392  592938 kic_runner.go:114] Args: [docker exec --privileged embed-certs-456660 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1124 13:59:03.581120  592938 cli_runner.go:164] Run: docker container inspect embed-certs-456660 --format={{.State.Status}}
	I1124 13:59:03.601366  592938 machine.go:94] provisionDockerMachine start ...
	I1124 13:59:03.601448  592938 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-456660
	I1124 13:59:03.619901  592938 main.go:143] libmachine: Using SSH client type: native
	I1124 13:59:03.620336  592938 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33448 <nil> <nil>}
	I1124 13:59:03.620361  592938 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 13:59:03.764471  592938 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-456660
	
	I1124 13:59:03.764503  592938 ubuntu.go:182] provisioning hostname "embed-certs-456660"
	I1124 13:59:03.764577  592938 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-456660
	I1124 13:59:03.783422  592938 main.go:143] libmachine: Using SSH client type: native
	I1124 13:59:03.783695  592938 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33448 <nil> <nil>}
	I1124 13:59:03.783714  592938 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-456660 && echo "embed-certs-456660" | sudo tee /etc/hostname
	I1124 13:59:03.935840  592938 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-456660
	
	I1124 13:59:03.935949  592938 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-456660
	I1124 13:59:03.952825  592938 main.go:143] libmachine: Using SSH client type: native
	I1124 13:59:03.953098  592938 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33448 <nil> <nil>}
	I1124 13:59:03.953118  592938 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-456660' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-456660/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-456660' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 13:59:04.092930  592938 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 13:59:04.092975  592938 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21932-348000/.minikube CaCertPath:/home/jenkins/minikube-integration/21932-348000/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21932-348000/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21932-348000/.minikube}
	I1124 13:59:04.093013  592938 ubuntu.go:190] setting up certificates
	I1124 13:59:04.093033  592938 provision.go:84] configureAuth start
	I1124 13:59:04.093081  592938 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-456660
	I1124 13:59:04.109184  592938 provision.go:143] copyHostCerts
	I1124 13:59:04.109236  592938 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-348000/.minikube/key.pem, removing ...
	I1124 13:59:04.109245  592938 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-348000/.minikube/key.pem
	I1124 13:59:04.109308  592938 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-348000/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21932-348000/.minikube/key.pem (1675 bytes)
	I1124 13:59:04.109387  592938 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-348000/.minikube/ca.pem, removing ...
	I1124 13:59:04.109395  592938 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-348000/.minikube/ca.pem
	I1124 13:59:04.109420  592938 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-348000/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21932-348000/.minikube/ca.pem (1078 bytes)
	I1124 13:59:04.109474  592938 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-348000/.minikube/cert.pem, removing ...
	I1124 13:59:04.109482  592938 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-348000/.minikube/cert.pem
	I1124 13:59:04.109503  592938 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-348000/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21932-348000/.minikube/cert.pem (1123 bytes)
	I1124 13:59:04.109553  592938 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21932-348000/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21932-348000/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21932-348000/.minikube/certs/ca-key.pem org=jenkins.embed-certs-456660 san=[127.0.0.1 192.168.85.2 embed-certs-456660 localhost minikube]
	I1124 13:59:04.180674  592938 provision.go:177] copyRemoteCerts
	I1124 13:59:04.180723  592938 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 13:59:04.180766  592938 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-456660
	I1124 13:59:04.197050  592938 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/embed-certs-456660/id_rsa Username:docker}
	I1124 13:59:04.296652  592938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1124 13:59:04.315921  592938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1124 13:59:04.333432  592938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1124 13:59:04.350492  592938 provision.go:87] duration metric: took 257.44313ms to configureAuth
	I1124 13:59:04.350515  592938 ubuntu.go:206] setting minikube options for container-runtime
	I1124 13:59:04.350689  592938 config.go:182] Loaded profile config "embed-certs-456660": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 13:59:04.350815  592938 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-456660
	I1124 13:59:04.367314  592938 main.go:143] libmachine: Using SSH client type: native
	I1124 13:59:04.367554  592938 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33448 <nil> <nil>}
	I1124 13:59:04.367591  592938 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1124 13:59:04.647029  592938 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1124 13:59:04.647057  592938 machine.go:97] duration metric: took 1.045667903s to provisionDockerMachine
	I1124 13:59:04.647067  592938 client.go:176] duration metric: took 6.634553448s to LocalClient.Create
	I1124 13:59:04.647087  592938 start.go:167] duration metric: took 6.634607347s to libmachine.API.Create "embed-certs-456660"
	I1124 13:59:04.647097  592938 start.go:293] postStartSetup for "embed-certs-456660" (driver="docker")
	I1124 13:59:04.647110  592938 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 13:59:04.647183  592938 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 13:59:04.647235  592938 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-456660
	I1124 13:59:04.664815  592938 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/embed-certs-456660/id_rsa Username:docker}
	I1124 13:59:04.765922  592938 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 13:59:04.769390  592938 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 13:59:04.769423  592938 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 13:59:04.769436  592938 filesync.go:126] Scanning /home/jenkins/minikube-integration/21932-348000/.minikube/addons for local assets ...
	I1124 13:59:04.769480  592938 filesync.go:126] Scanning /home/jenkins/minikube-integration/21932-348000/.minikube/files for local assets ...
	I1124 13:59:04.769548  592938 filesync.go:149] local asset: /home/jenkins/minikube-integration/21932-348000/.minikube/files/etc/ssl/certs/3515932.pem -> 3515932.pem in /etc/ssl/certs
	I1124 13:59:04.769627  592938 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 13:59:04.776940  592938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/files/etc/ssl/certs/3515932.pem --> /etc/ssl/certs/3515932.pem (1708 bytes)
	I1124 13:59:04.795617  592938 start.go:296] duration metric: took 148.508431ms for postStartSetup
	I1124 13:59:04.795978  592938 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-456660
	I1124 13:59:04.812064  592938 profile.go:143] Saving config to /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/embed-certs-456660/config.json ...
	I1124 13:59:04.812274  592938 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 13:59:04.812322  592938 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-456660
	I1124 13:59:04.827826  592938 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/embed-certs-456660/id_rsa Username:docker}
	I1124 13:59:04.924622  592938 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 13:59:04.930001  592938 start.go:128] duration metric: took 6.919762039s to createHost
	I1124 13:59:04.930025  592938 start.go:83] releasing machines lock for "embed-certs-456660", held for 6.91987827s
	I1124 13:59:04.930094  592938 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-456660
	I1124 13:59:04.946964  592938 ssh_runner.go:195] Run: cat /version.json
	I1124 13:59:04.947005  592938 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 13:59:04.947025  592938 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-456660
	I1124 13:59:04.947063  592938 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-456660
	I1124 13:59:04.963467  592938 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/embed-certs-456660/id_rsa Username:docker}
	I1124 13:59:04.965071  592938 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/embed-certs-456660/id_rsa Username:docker}
	I1124 13:59:05.059369  592938 ssh_runner.go:195] Run: systemctl --version
	I1124 13:59:05.130113  592938 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1124 13:59:05.163625  592938 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 13:59:05.168082  592938 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 13:59:05.168151  592938 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 13:59:05.192247  592938 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1124 13:59:05.192266  592938 start.go:496] detecting cgroup driver to use...
	I1124 13:59:05.192297  592938 detect.go:190] detected "systemd" cgroup driver on host os
	I1124 13:59:05.192337  592938 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1124 13:59:05.207714  592938 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1124 13:59:05.219166  592938 docker.go:218] disabling cri-docker service (if available) ...
	I1124 13:59:05.219220  592938 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 13:59:05.234695  592938 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 13:59:05.250696  592938 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 13:59:05.334293  592938 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 13:59:05.420473  592938 docker.go:234] disabling docker service ...
	I1124 13:59:05.420529  592938 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 13:59:05.438863  592938 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 13:59:05.450467  592938 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 13:59:05.531957  592938 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 13:59:05.612589  592938 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 13:59:05.625823  592938 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 13:59:05.642160  592938 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1124 13:59:05.642220  592938 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 13:59:05.652601  592938 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1124 13:59:05.652667  592938 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 13:59:05.661061  592938 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 13:59:05.669339  592938 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 13:59:05.678464  592938 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 13:59:05.686359  592938 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 13:59:05.695388  592938 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 13:59:05.709017  592938 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 13:59:05.718414  592938 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 13:59:05.726796  592938 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 13:59:05.735193  592938 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 13:59:05.821470  592938 ssh_runner.go:195] Run: sudo systemctl restart crio
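The sed edits above rewrite /etc/crio/crio.conf.d/02-crio.conf before the restart; a quick way to spot-check the resulting settings (an illustrative command, not taken from this run):

	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf
	# expected, based on the edits above:
	#   pause_image = "registry.k8s.io/pause:3.10.1"
	#   cgroup_manager = "systemd"
	#   conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",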
	I1124 13:59:05.969579  592938 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1124 13:59:05.969641  592938 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1124 13:59:05.974045  592938 start.go:564] Will wait 60s for crictl version
	I1124 13:59:05.974102  592938 ssh_runner.go:195] Run: which crictl
	I1124 13:59:05.977856  592938 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 13:59:06.007830  592938 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1124 13:59:06.007938  592938 ssh_runner.go:195] Run: crio --version
	I1124 13:59:06.041964  592938 ssh_runner.go:195] Run: crio --version
	I1124 13:59:06.073780  592938 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1124 13:59:06.074910  592938 cli_runner.go:164] Run: docker network inspect embed-certs-456660 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 13:59:06.091963  592938 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1124 13:59:06.096146  592938 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 13:59:06.107259  592938 kubeadm.go:884] updating cluster {Name:embed-certs-456660 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-456660 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 13:59:06.107412  592938 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 13:59:06.107470  592938 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 13:59:06.139530  592938 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 13:59:06.139547  592938 crio.go:433] Images already preloaded, skipping extraction
	I1124 13:59:06.139587  592938 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 13:59:06.164153  592938 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 13:59:06.164173  592938 cache_images.go:86] Images are preloaded, skipping loading
	I1124 13:59:06.164183  592938 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1124 13:59:06.164285  592938 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-456660 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-456660 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1124 13:59:06.164359  592938 ssh_runner.go:195] Run: crio config
	I1124 13:59:06.217902  592938 cni.go:84] Creating CNI manager for ""
	I1124 13:59:06.217931  592938 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 13:59:06.217954  592938 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 13:59:06.217985  592938 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-456660 NodeName:embed-certs-456660 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 13:59:06.218176  592938 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-456660"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1124 13:59:06.218254  592938 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1124 13:59:06.227084  592938 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 13:59:06.227141  592938 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 13:59:06.234616  592938 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1124 13:59:06.246608  592938 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 13:59:06.260721  592938 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1124 13:59:06.273156  592938 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1124 13:59:06.276572  592938 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 13:59:06.286079  592938 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 13:59:06.370120  592938 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 13:59:06.398303  592938 certs.go:69] Setting up /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/embed-certs-456660 for IP: 192.168.85.2
	I1124 13:59:06.398326  592938 certs.go:195] generating shared ca certs ...
	I1124 13:59:06.398348  592938 certs.go:227] acquiring lock for ca certs: {Name:mk929c5478505d0d4647158f3ccc02830de7b582 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:59:06.398517  592938 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21932-348000/.minikube/ca.key
	I1124 13:59:06.398579  592938 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21932-348000/.minikube/proxy-client-ca.key
	I1124 13:59:06.398594  592938 certs.go:257] generating profile certs ...
	I1124 13:59:06.398666  592938 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/embed-certs-456660/client.key
	I1124 13:59:06.398684  592938 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/embed-certs-456660/client.crt with IP's: []
	I1124 13:59:06.498548  592938 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/embed-certs-456660/client.crt ...
	I1124 13:59:06.498572  592938 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/embed-certs-456660/client.crt: {Name:mk5dbdd1018942be64d0fe41c2041c0eb09648bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:59:06.498730  592938 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/embed-certs-456660/client.key ...
	I1124 13:59:06.498742  592938 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/embed-certs-456660/client.key: {Name:mk26189c78849c49110708e44f9d67abb6861356 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:59:06.498824  592938 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/embed-certs-456660/apiserver.key.52895963
	I1124 13:59:06.498839  592938 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/embed-certs-456660/apiserver.crt.52895963 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1124 13:59:06.702165  592938 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/embed-certs-456660/apiserver.crt.52895963 ...
	I1124 13:59:06.702188  592938 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/embed-certs-456660/apiserver.crt.52895963: {Name:mk338aa06a2033dabf1e3f21ca21dd234c5e2dd3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:59:06.702334  592938 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/embed-certs-456660/apiserver.key.52895963 ...
	I1124 13:59:06.702346  592938 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/embed-certs-456660/apiserver.key.52895963: {Name:mk16a3b8cf229e5b8ae17a1b148cd2da7f8d36dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:59:06.702420  592938 certs.go:382] copying /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/embed-certs-456660/apiserver.crt.52895963 -> /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/embed-certs-456660/apiserver.crt
	I1124 13:59:06.702511  592938 certs.go:386] copying /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/embed-certs-456660/apiserver.key.52895963 -> /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/embed-certs-456660/apiserver.key
	I1124 13:59:06.702573  592938 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/embed-certs-456660/proxy-client.key
	I1124 13:59:06.702588  592938 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/embed-certs-456660/proxy-client.crt with IP's: []
	I1124 13:59:06.788430  592938 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/embed-certs-456660/proxy-client.crt ...
	I1124 13:59:06.788451  592938 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/embed-certs-456660/proxy-client.crt: {Name:mkc236e2b30f7bc3ca90b5f1d5921c0b1bc83492 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:59:06.788585  592938 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/embed-certs-456660/proxy-client.key ...
	I1124 13:59:06.788597  592938 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/embed-certs-456660/proxy-client.key: {Name:mkba03e70aa406aca0eb283f1580bfcdbc075759 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:59:06.788763  592938 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-348000/.minikube/certs/351593.pem (1338 bytes)
	W1124 13:59:06.788800  592938 certs.go:480] ignoring /home/jenkins/minikube-integration/21932-348000/.minikube/certs/351593_empty.pem, impossibly tiny 0 bytes
	I1124 13:59:06.788811  592938 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-348000/.minikube/certs/ca-key.pem (1675 bytes)
	I1124 13:59:06.788833  592938 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-348000/.minikube/certs/ca.pem (1078 bytes)
	I1124 13:59:06.788858  592938 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-348000/.minikube/certs/cert.pem (1123 bytes)
	I1124 13:59:06.788880  592938 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-348000/.minikube/certs/key.pem (1675 bytes)
	I1124 13:59:06.788946  592938 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-348000/.minikube/files/etc/ssl/certs/3515932.pem (1708 bytes)
	I1124 13:59:06.789502  592938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 13:59:06.807160  592938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 13:59:06.829000  592938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 13:59:06.845563  592938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1124 13:59:06.862112  592938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/embed-certs-456660/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1124 13:59:06.878601  592938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/embed-certs-456660/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1124 13:59:06.895283  592938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/embed-certs-456660/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 13:59:06.911984  592938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/embed-certs-456660/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1124 13:59:06.928783  592938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 13:59:06.948042  592938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/certs/351593.pem --> /usr/share/ca-certificates/351593.pem (1338 bytes)
	I1124 13:59:06.966342  592938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/files/etc/ssl/certs/3515932.pem --> /usr/share/ca-certificates/3515932.pem (1708 bytes)
	I1124 13:59:06.985091  592938 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 13:59:06.997622  592938 ssh_runner.go:195] Run: openssl version
	I1124 13:59:07.003945  592938 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/351593.pem && ln -fs /usr/share/ca-certificates/351593.pem /etc/ssl/certs/351593.pem"
	I1124 13:59:07.011498  592938 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/351593.pem
	I1124 13:59:07.015014  592938 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 13:19 /usr/share/ca-certificates/351593.pem
	I1124 13:59:07.015062  592938 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/351593.pem
	I1124 13:59:07.049708  592938 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/351593.pem /etc/ssl/certs/51391683.0"
	I1124 13:59:07.057624  592938 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3515932.pem && ln -fs /usr/share/ca-certificates/3515932.pem /etc/ssl/certs/3515932.pem"
	I1124 13:59:07.065520  592938 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3515932.pem
	I1124 13:59:07.068918  592938 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 13:19 /usr/share/ca-certificates/3515932.pem
	I1124 13:59:07.068976  592938 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3515932.pem
	I1124 13:59:07.106713  592938 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3515932.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 13:59:07.115433  592938 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 13:59:07.123951  592938 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 13:59:07.128215  592938 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 13:14 /usr/share/ca-certificates/minikubeCA.pem
	I1124 13:59:07.128270  592938 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 13:59:07.163452  592938 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 13:59:07.171437  592938 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 13:59:07.174773  592938 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1124 13:59:07.174822  592938 kubeadm.go:401] StartCluster: {Name:embed-certs-456660 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-456660 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 13:59:07.174902  592938 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 13:59:07.174943  592938 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 13:59:07.201086  592938 cri.go:89] found id: ""
	I1124 13:59:07.201141  592938 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 13:59:07.210584  592938 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1124 13:59:07.219785  592938 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1124 13:59:07.219836  592938 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 13:59:07.227418  592938 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1124 13:59:07.227435  592938 kubeadm.go:158] found existing configuration files:
	
	I1124 13:59:07.227481  592938 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1124 13:59:07.236143  592938 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1124 13:59:07.236191  592938 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1124 13:59:07.244016  592938 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1124 13:59:07.251806  592938 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1124 13:59:07.251853  592938 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 13:59:07.259475  592938 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1124 13:59:07.266732  592938 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1124 13:59:07.266771  592938 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 13:59:07.273760  592938 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1124 13:59:07.281166  592938 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1124 13:59:07.281211  592938 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1124 13:59:07.288071  592938 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1124 13:59:07.326234  592938 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1124 13:59:07.326327  592938 kubeadm.go:319] [preflight] Running pre-flight checks
	I1124 13:59:07.348023  592938 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1124 13:59:07.348115  592938 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1124 13:59:07.348165  592938 kubeadm.go:319] OS: Linux
	I1124 13:59:07.348229  592938 kubeadm.go:319] CGROUPS_CPU: enabled
	I1124 13:59:07.348295  592938 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1124 13:59:07.348364  592938 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1124 13:59:07.348432  592938 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1124 13:59:07.348500  592938 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1124 13:59:07.348565  592938 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1124 13:59:07.348631  592938 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1124 13:59:07.348691  592938 kubeadm.go:319] CGROUPS_IO: enabled
	I1124 13:59:07.406542  592938 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1124 13:59:07.406686  592938 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1124 13:59:07.406833  592938 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1124 13:59:07.414244  592938 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1124 13:59:07.415800  592938 out.go:252]   - Generating certificates and keys ...
	I1124 13:59:07.415883  592938 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1124 13:59:07.415992  592938 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1124 13:59:07.501311  592938 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1124 13:59:07.714621  592938 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	
	
	==> CRI-O <==
	Nov 24 13:58:50 no-preload-495729 crio[572]: time="2025-11-24T13:58:50.548544361Z" level=info msg="Created container 0a7adf2b4746e1fb9e8afcfaba5e975500e5db2d158856c135eff6f024dd7995: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-r86vh/dashboard-metrics-scraper" id=e4a5568b-a4eb-4dbc-adbe-cf0b94e548a9 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 13:58:50 no-preload-495729 crio[572]: time="2025-11-24T13:58:50.549152149Z" level=info msg="Starting container: 0a7adf2b4746e1fb9e8afcfaba5e975500e5db2d158856c135eff6f024dd7995" id=189d45d5-36ed-4a98-bc5d-b063e41eb230 name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 13:58:50 no-preload-495729 crio[572]: time="2025-11-24T13:58:50.551339759Z" level=info msg="Started container" PID=1665 containerID=0a7adf2b4746e1fb9e8afcfaba5e975500e5db2d158856c135eff6f024dd7995 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-r86vh/dashboard-metrics-scraper id=189d45d5-36ed-4a98-bc5d-b063e41eb230 name=/runtime.v1.RuntimeService/StartContainer sandboxID=bd48bd6794798f0d880f6d7affc69ed03e9f5aa7a993b803f374e7dee55da602
	Nov 24 13:58:51 no-preload-495729 crio[572]: time="2025-11-24T13:58:51.377958963Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=2fb0f4df-023e-41c5-9750-31b3be2691e3 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 13:58:51 no-preload-495729 crio[572]: time="2025-11-24T13:58:51.395815191Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=7e8f5e9d-7641-4c18-9c31-1e0bf134680e name=/runtime.v1.ImageService/ImageStatus
	Nov 24 13:58:51 no-preload-495729 crio[572]: time="2025-11-24T13:58:51.406286047Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-r86vh/dashboard-metrics-scraper" id=bcdc6559-290b-490e-9b1a-35b4fba6d741 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 13:58:51 no-preload-495729 crio[572]: time="2025-11-24T13:58:51.406537747Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 13:58:51 no-preload-495729 crio[572]: time="2025-11-24T13:58:51.496596117Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 13:58:51 no-preload-495729 crio[572]: time="2025-11-24T13:58:51.497392329Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 13:58:51 no-preload-495729 crio[572]: time="2025-11-24T13:58:51.579959775Z" level=info msg="Created container cf1720169940a81c8764b2cac5355d763db5c89ffe09b53993af9b0807b75ad6: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-r86vh/dashboard-metrics-scraper" id=bcdc6559-290b-490e-9b1a-35b4fba6d741 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 13:58:51 no-preload-495729 crio[572]: time="2025-11-24T13:58:51.580910215Z" level=info msg="Starting container: cf1720169940a81c8764b2cac5355d763db5c89ffe09b53993af9b0807b75ad6" id=7aaeea88-dc32-4678-b3b3-2e38adc5dd2b name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 13:58:51 no-preload-495729 crio[572]: time="2025-11-24T13:58:51.583614633Z" level=info msg="Started container" PID=1682 containerID=cf1720169940a81c8764b2cac5355d763db5c89ffe09b53993af9b0807b75ad6 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-r86vh/dashboard-metrics-scraper id=7aaeea88-dc32-4678-b3b3-2e38adc5dd2b name=/runtime.v1.RuntimeService/StartContainer sandboxID=bd48bd6794798f0d880f6d7affc69ed03e9f5aa7a993b803f374e7dee55da602
	Nov 24 13:58:52 no-preload-495729 crio[572]: time="2025-11-24T13:58:52.384450187Z" level=info msg="Removing container: 0a7adf2b4746e1fb9e8afcfaba5e975500e5db2d158856c135eff6f024dd7995" id=7a2cd042-74fd-4c8b-b1ac-74bb149a7c32 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 24 13:58:52 no-preload-495729 crio[572]: time="2025-11-24T13:58:52.394807853Z" level=info msg="Removed container 0a7adf2b4746e1fb9e8afcfaba5e975500e5db2d158856c135eff6f024dd7995: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-r86vh/dashboard-metrics-scraper" id=7a2cd042-74fd-4c8b-b1ac-74bb149a7c32 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 24 13:58:53 no-preload-495729 crio[572]: time="2025-11-24T13:58:53.869026446Z" level=info msg="Pulled image: docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029" id=7bf824dd-ebf6-4dd9-9707-bc49ee3e9b35 name=/runtime.v1.ImageService/PullImage
	Nov 24 13:58:53 no-preload-495729 crio[572]: time="2025-11-24T13:58:53.869611309Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=dbadbb5f-2183-480a-98dd-149038828dd8 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 13:58:53 no-preload-495729 crio[572]: time="2025-11-24T13:58:53.871027458Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=79ecdb30-7dc7-49b7-b62d-da36c156671a name=/runtime.v1.ImageService/ImageStatus
	Nov 24 13:58:53 no-preload-495729 crio[572]: time="2025-11-24T13:58:53.874234504Z" level=info msg="Creating container: kubernetes-dashboard/kubernetes-dashboard-855c9754f9-xvfgk/kubernetes-dashboard" id=f6a759e5-b374-48e6-91f2-d85d3da8331b name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 13:58:53 no-preload-495729 crio[572]: time="2025-11-24T13:58:53.874369531Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 13:58:53 no-preload-495729 crio[572]: time="2025-11-24T13:58:53.878181099Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 13:58:53 no-preload-495729 crio[572]: time="2025-11-24T13:58:53.878428845Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/1c3e734a06e43d7346148d9a97a2350971ee3115178d587da71c3e4dd74a510c/merged/etc/group: no such file or directory"
	Nov 24 13:58:53 no-preload-495729 crio[572]: time="2025-11-24T13:58:53.878857552Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 13:58:53 no-preload-495729 crio[572]: time="2025-11-24T13:58:53.908068103Z" level=info msg="Created container 9a185bbd7db9231364062faa7b8bf2b09a8815ef19ba81adbcd51a569f653ce2: kubernetes-dashboard/kubernetes-dashboard-855c9754f9-xvfgk/kubernetes-dashboard" id=f6a759e5-b374-48e6-91f2-d85d3da8331b name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 13:58:53 no-preload-495729 crio[572]: time="2025-11-24T13:58:53.908568691Z" level=info msg="Starting container: 9a185bbd7db9231364062faa7b8bf2b09a8815ef19ba81adbcd51a569f653ce2" id=bfa01e1f-4a00-4c97-a80b-ea20210a5c02 name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 13:58:53 no-preload-495729 crio[572]: time="2025-11-24T13:58:53.910330513Z" level=info msg="Started container" PID=1743 containerID=9a185bbd7db9231364062faa7b8bf2b09a8815ef19ba81adbcd51a569f653ce2 description=kubernetes-dashboard/kubernetes-dashboard-855c9754f9-xvfgk/kubernetes-dashboard id=bfa01e1f-4a00-4c97-a80b-ea20210a5c02 name=/runtime.v1.RuntimeService/StartContainer sandboxID=98a77350050120ef00624b1cd984b628ee97e0dff383d0a339c372426984c1b1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	9a185bbd7db92       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   15 seconds ago      Running             kubernetes-dashboard        0                   98a7735005012       kubernetes-dashboard-855c9754f9-xvfgk        kubernetes-dashboard
	cf1720169940a       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           17 seconds ago      Exited              dashboard-metrics-scraper   1                   bd48bd6794798       dashboard-metrics-scraper-6ffb444bf9-r86vh   kubernetes-dashboard
	ab498d86243e8       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           28 seconds ago      Running             busybox                     1                   578a6b4c26cfd       busybox                                      default
	f8ae90386bcd1       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           28 seconds ago      Running             coredns                     0                   01dea1db56b46       coredns-66bc5c9577-b7t2v                     kube-system
	2a07af39c8cfa       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           31 seconds ago      Running             kube-proxy                  0                   e6c97536c6b64       kube-proxy-mxzvp                             kube-system
	0e39be3eda514       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           31 seconds ago      Exited              storage-provisioner         0                   c403021447e90       storage-provisioner                          kube-system
	903c84e46ccde       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           31 seconds ago      Running             kindnet-cni                 0                   65033d721a272       kindnet-mtrx6                                kube-system
	c5414bcf8f37e       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           34 seconds ago      Running             kube-apiserver              0                   f9494f0e364c9       kube-apiserver-no-preload-495729             kube-system
	62030928e1d17       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           34 seconds ago      Running             kube-scheduler              0                   9fd49b2cadb2a       kube-scheduler-no-preload-495729             kube-system
	fe3faa18594ef       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           34 seconds ago      Running             kube-controller-manager     0                   2a662a0321258       kube-controller-manager-no-preload-495729    kube-system
	157c65960e2b0       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           34 seconds ago      Running             etcd                        0                   180cc6cf78bac       etcd-no-preload-495729                       kube-system
	
	
	==> coredns [f8ae90386bcd1263701dc3942191b95109eee84dbc38e65217bebf360af1be31] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:49022 - 26097 "HINFO IN 7324422859843716833.8518420187868997806. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.094296095s
	
	
	==> describe nodes <==
	Name:               no-preload-495729
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-495729
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b5d1c9f4e75f4e638a533695fd62619949cefcab
	                    minikube.k8s.io/name=no-preload-495729
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T13_57_40_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 13:57:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-495729
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 13:58:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 13:58:46 +0000   Mon, 24 Nov 2025 13:57:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 13:58:46 +0000   Mon, 24 Nov 2025 13:57:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 13:58:46 +0000   Mon, 24 Nov 2025 13:57:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 13:58:46 +0000   Mon, 24 Nov 2025 13:58:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    no-preload-495729
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                b9ead28d-5d73-474f-b9bc-4fe7bfd306f8
	  Boot ID:                    9a34d64a-eb17-4892-9c0b-855837aec864
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         69s
	  kube-system                 coredns-66bc5c9577-b7t2v                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     84s
	  kube-system                 etcd-no-preload-495729                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         90s
	  kube-system                 kindnet-mtrx6                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      85s
	  kube-system                 kube-apiserver-no-preload-495729              250m (3%)     0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 kube-controller-manager-no-preload-495729     200m (2%)     0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 kube-proxy-mxzvp                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         85s
	  kube-system                 kube-scheduler-no-preload-495729              100m (1%)     0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         84s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-r86vh    0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-xvfgk         0 (0%)        0 (0%)      0 (0%)           0 (0%)         29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 83s                kube-proxy       
	  Normal  Starting                 31s                kube-proxy       
	  Normal  Starting                 95s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  95s (x8 over 95s)  kubelet          Node no-preload-495729 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    95s (x8 over 95s)  kubelet          Node no-preload-495729 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     95s (x8 over 95s)  kubelet          Node no-preload-495729 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    90s                kubelet          Node no-preload-495729 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  90s                kubelet          Node no-preload-495729 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     90s                kubelet          Node no-preload-495729 status is now: NodeHasSufficientPID
	  Normal  Starting                 90s                kubelet          Starting kubelet.
	  Normal  RegisteredNode           86s                node-controller  Node no-preload-495729 event: Registered Node no-preload-495729 in Controller
	  Normal  NodeReady                71s                kubelet          Node no-preload-495729 status is now: NodeReady
	  Normal  Starting                 35s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  35s (x8 over 35s)  kubelet          Node no-preload-495729 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    35s (x8 over 35s)  kubelet          Node no-preload-495729 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     35s (x8 over 35s)  kubelet          Node no-preload-495729 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           29s                node-controller  Node no-preload-495729 event: Registered Node no-preload-495729 in Controller
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a c8 62 0b 56 43 08 06
	[  +0.000389] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 7e 1c 4f d3 0f 6c 08 06
	[Nov24 13:16] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +1.054353] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +1.023912] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +1.023872] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +1.023897] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +1.023891] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +2.047768] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +4.031637] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +8.191144] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[ +16.382308] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[Nov24 13:17] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	
	
	==> etcd [157c65960e2b04d4d57edcc130777d480830cee904e929103df8bc888e89eb35] <==
	{"level":"warn","ts":"2025-11-24T13:58:36.220287Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48742","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:58:36.225844Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48768","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:58:36.231881Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:58:36.238383Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:58:36.245395Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:58:36.251227Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48848","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:58:36.257186Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48874","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:58:36.263178Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48886","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:58:36.269617Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48904","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:58:36.276224Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48922","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:58:36.282242Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48938","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:58:36.288095Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48954","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:58:36.294185Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48966","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:58:36.300504Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48998","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:58:36.306285Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49018","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:58:36.323319Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49038","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:58:36.328826Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49060","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:58:36.335403Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49076","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:58:36.381684Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49088","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-24T13:58:48.891607Z","caller":"traceutil/trace.go:172","msg":"trace[1029022935] linearizableReadLoop","detail":"{readStateIndex:653; appliedIndex:653; }","duration":"113.058664ms","start":"2025-11-24T13:58:48.778523Z","end":"2025-11-24T13:58:48.891582Z","steps":["trace[1029022935] 'read index received'  (duration: 113.050438ms)","trace[1029022935] 'applied index is now lower than readState.Index'  (duration: 6.916µs)"],"step_count":2}
	{"level":"info","ts":"2025-11-24T13:58:48.891750Z","caller":"traceutil/trace.go:172","msg":"trace[1104860547] transaction","detail":"{read_only:false; response_revision:624; number_of_response:1; }","duration":"115.55741ms","start":"2025-11-24T13:58:48.776180Z","end":"2025-11-24T13:58:48.891738Z","steps":["trace[1104860547] 'process raft request'  (duration: 115.437472ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-24T13:58:48.891776Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"113.228471ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/no-preload-495729\" limit:1 ","response":"range_response_count:1 size:4853"}
	{"level":"info","ts":"2025-11-24T13:58:48.891828Z","caller":"traceutil/trace.go:172","msg":"trace[1769369819] range","detail":"{range_begin:/registry/minions/no-preload-495729; range_end:; response_count:1; response_revision:623; }","duration":"113.301914ms","start":"2025-11-24T13:58:48.778518Z","end":"2025-11-24T13:58:48.891820Z","steps":["trace[1769369819] 'agreement among raft nodes before linearized reading'  (duration: 113.153705ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T13:58:49.108319Z","caller":"traceutil/trace.go:172","msg":"trace[1429464594] transaction","detail":"{read_only:false; response_revision:628; number_of_response:1; }","duration":"159.060502ms","start":"2025-11-24T13:58:48.949243Z","end":"2025-11-24T13:58:49.108304Z","steps":["trace[1429464594] 'process raft request'  (duration: 130.273441ms)","trace[1429464594] 'compare'  (duration: 28.698889ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-24T13:58:49.306190Z","caller":"traceutil/trace.go:172","msg":"trace[1802569450] transaction","detail":"{read_only:false; response_revision:629; number_of_response:1; }","duration":"110.432941ms","start":"2025-11-24T13:58:49.195727Z","end":"2025-11-24T13:58:49.306160Z","steps":["trace[1802569450] 'process raft request'  (duration: 100.438185ms)"],"step_count":1}
	
	
	==> kernel <==
	 13:59:09 up  2:41,  0 user,  load average: 1.72, 2.79, 1.93
	Linux no-preload-495729 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [903c84e46ccdef1d6896c69c09ebe9a3407439b6081697d0a3f3cb40af80da77] <==
	I1124 13:58:37.937855       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 13:58:37.938130       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1124 13:58:37.938277       1 main.go:148] setting mtu 1500 for CNI 
	I1124 13:58:37.938301       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 13:58:37.938342       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T13:58:38Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 13:58:38.137679       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 13:58:38.137714       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 13:58:38.137727       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 13:58:38.137860       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1124 13:58:38.437937       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 13:58:38.437966       1 metrics.go:72] Registering metrics
	I1124 13:58:38.438119       1 controller.go:711] "Syncing nftables rules"
	I1124 13:58:48.049515       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1124 13:58:48.049580       1 main.go:301] handling current node
	I1124 13:58:58.052855       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1124 13:58:58.052905       1 main.go:301] handling current node
	I1124 13:59:08.058966       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1124 13:59:08.058990       1 main.go:301] handling current node
	
	
	==> kube-apiserver [c5414bcf8f37eefc1509b423927ff2e9afae879fa089646bfe236c7e8838f941] <==
	I1124 13:58:36.832814       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1124 13:58:36.832975       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1124 13:58:36.833078       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1124 13:58:36.833119       1 aggregator.go:171] initial CRD sync complete...
	I1124 13:58:36.833135       1 autoregister_controller.go:144] Starting autoregister controller
	I1124 13:58:36.833191       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1124 13:58:36.833251       1 cache.go:39] Caches are synced for autoregister controller
	I1124 13:58:36.833442       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1124 13:58:36.833490       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1124 13:58:36.833563       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1124 13:58:36.833562       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1124 13:58:36.839986       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 13:58:36.840578       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1124 13:58:36.868053       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1124 13:58:37.055787       1 controller.go:667] quota admission added evaluator for: namespaces
	I1124 13:58:37.079561       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1124 13:58:37.096771       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 13:58:37.102256       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 13:58:37.111813       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1124 13:58:37.138441       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.100.48.248"}
	I1124 13:58:37.147429       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.106.230.82"}
	I1124 13:58:37.735343       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1124 13:58:40.516532       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1124 13:58:40.567137       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1124 13:58:40.766729       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [fe3faa18594eff20855f0be9dc75861d22f7a99057fb8fd3c24ff01eaf028868] <==
	I1124 13:58:40.146485       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1124 13:58:40.163923       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1124 13:58:40.163945       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1124 13:58:40.164005       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 13:58:40.164021       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1124 13:58:40.164026       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1124 13:58:40.164157       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1124 13:58:40.164190       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1124 13:58:40.164231       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1124 13:58:40.164252       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1124 13:58:40.164343       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1124 13:58:40.164430       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1124 13:58:40.164431       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1124 13:58:40.165105       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1124 13:58:40.166906       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1124 13:58:40.167461       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1124 13:58:40.167779       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1124 13:58:40.168956       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1124 13:58:40.169052       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 13:58:40.172305       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1124 13:58:40.176500       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1124 13:58:40.177691       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1124 13:58:40.179978       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1124 13:58:40.188251       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 13:58:50.098051       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [2a07af39c8cfa9f54acb40356ea5d9c755cef6b9908261e0552df8388a4e4b5b] <==
	I1124 13:58:37.693456       1 server_linux.go:53] "Using iptables proxy"
	I1124 13:58:37.771665       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1124 13:58:37.872617       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1124 13:58:37.872653       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1124 13:58:37.872753       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 13:58:37.893798       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 13:58:37.893855       1 server_linux.go:132] "Using iptables Proxier"
	I1124 13:58:37.899769       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 13:58:37.900585       1 server.go:527] "Version info" version="v1.34.1"
	I1124 13:58:37.900610       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 13:58:37.903432       1 config.go:200] "Starting service config controller"
	I1124 13:58:37.903463       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 13:58:37.903801       1 config.go:309] "Starting node config controller"
	I1124 13:58:37.903812       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 13:58:37.903818       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1124 13:58:37.903929       1 config.go:106] "Starting endpoint slice config controller"
	I1124 13:58:37.903943       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 13:58:37.903964       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 13:58:37.903969       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 13:58:38.004032       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1124 13:58:38.004056       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1124 13:58:38.004083       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [62030928e1d177cbf0ad8f12916eb88c433896f670073ef28f5784598ad3be2b] <==
	I1124 13:58:35.364600       1 serving.go:386] Generated self-signed cert in-memory
	W1124 13:58:36.762177       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1124 13:58:36.762205       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1124 13:58:36.762231       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1124 13:58:36.762241       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1124 13:58:36.802077       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1124 13:58:36.802233       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 13:58:36.805276       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 13:58:36.805375       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 13:58:36.806337       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1124 13:58:36.806438       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1124 13:58:36.906009       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 24 13:58:38 no-preload-495729 kubelet[722]: E1124 13:58:38.068998     722 projected.go:196] Error preparing data for projected volume kube-api-access-8mkhf for pod default/busybox: object "default"/"kube-root-ca.crt" not registered
	Nov 24 13:58:38 no-preload-495729 kubelet[722]: E1124 13:58:38.069070     722 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bf3a1272-92ff-45db-ba2f-8e360dd19c97-kube-api-access-8mkhf podName:bf3a1272-92ff-45db-ba2f-8e360dd19c97 nodeName:}" failed. No retries permitted until 2025-11-24 13:58:39.069053101 +0000 UTC m=+4.832049123 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-8mkhf" (UniqueName: "kubernetes.io/projected/bf3a1272-92ff-45db-ba2f-8e360dd19c97-kube-api-access-8mkhf") pod "busybox" (UID: "bf3a1272-92ff-45db-ba2f-8e360dd19c97") : object "default"/"kube-root-ca.crt" not registered
	Nov 24 13:58:38 no-preload-495729 kubelet[722]: E1124 13:58:38.973286     722 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Nov 24 13:58:38 no-preload-495729 kubelet[722]: E1124 13:58:38.973362     722 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/cfd3642f-4fab-4d58-ac21-5c59c0820cb6-config-volume podName:cfd3642f-4fab-4d58-ac21-5c59c0820cb6 nodeName:}" failed. No retries permitted until 2025-11-24 13:58:40.973349133 +0000 UTC m=+6.736345142 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/cfd3642f-4fab-4d58-ac21-5c59c0820cb6-config-volume") pod "coredns-66bc5c9577-b7t2v" (UID: "cfd3642f-4fab-4d58-ac21-5c59c0820cb6") : object "kube-system"/"coredns" not registered
	Nov 24 13:58:39 no-preload-495729 kubelet[722]: E1124 13:58:39.073692     722 projected.go:291] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	Nov 24 13:58:39 no-preload-495729 kubelet[722]: E1124 13:58:39.073721     722 projected.go:196] Error preparing data for projected volume kube-api-access-8mkhf for pod default/busybox: object "default"/"kube-root-ca.crt" not registered
	Nov 24 13:58:39 no-preload-495729 kubelet[722]: E1124 13:58:39.073807     722 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bf3a1272-92ff-45db-ba2f-8e360dd19c97-kube-api-access-8mkhf podName:bf3a1272-92ff-45db-ba2f-8e360dd19c97 nodeName:}" failed. No retries permitted until 2025-11-24 13:58:41.073785275 +0000 UTC m=+6.836781305 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-8mkhf" (UniqueName: "kubernetes.io/projected/bf3a1272-92ff-45db-ba2f-8e360dd19c97-kube-api-access-8mkhf") pod "busybox" (UID: "bf3a1272-92ff-45db-ba2f-8e360dd19c97") : object "default"/"kube-root-ca.crt" not registered
	Nov 24 13:58:47 no-preload-495729 kubelet[722]: I1124 13:58:47.120859     722 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96bgt\" (UniqueName: \"kubernetes.io/projected/885596b0-37d2-4c9a-9577-ac17e3e35b79-kube-api-access-96bgt\") pod \"kubernetes-dashboard-855c9754f9-xvfgk\" (UID: \"885596b0-37d2-4c9a-9577-ac17e3e35b79\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-xvfgk"
	Nov 24 13:58:47 no-preload-495729 kubelet[722]: I1124 13:58:47.120924     722 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84tg5\" (UniqueName: \"kubernetes.io/projected/4d95054b-09c8-44da-8982-1c48abf3f219-kube-api-access-84tg5\") pod \"dashboard-metrics-scraper-6ffb444bf9-r86vh\" (UID: \"4d95054b-09c8-44da-8982-1c48abf3f219\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-r86vh"
	Nov 24 13:58:47 no-preload-495729 kubelet[722]: I1124 13:58:47.120969     722 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/4d95054b-09c8-44da-8982-1c48abf3f219-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-r86vh\" (UID: \"4d95054b-09c8-44da-8982-1c48abf3f219\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-r86vh"
	Nov 24 13:58:47 no-preload-495729 kubelet[722]: I1124 13:58:47.121002     722 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/885596b0-37d2-4c9a-9577-ac17e3e35b79-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-xvfgk\" (UID: \"885596b0-37d2-4c9a-9577-ac17e3e35b79\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-xvfgk"
	Nov 24 13:58:48 no-preload-495729 kubelet[722]: I1124 13:58:48.762322     722 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 24 13:58:51 no-preload-495729 kubelet[722]: I1124 13:58:51.377361     722 scope.go:117] "RemoveContainer" containerID="0a7adf2b4746e1fb9e8afcfaba5e975500e5db2d158856c135eff6f024dd7995"
	Nov 24 13:58:52 no-preload-495729 kubelet[722]: I1124 13:58:52.383015     722 scope.go:117] "RemoveContainer" containerID="0a7adf2b4746e1fb9e8afcfaba5e975500e5db2d158856c135eff6f024dd7995"
	Nov 24 13:58:52 no-preload-495729 kubelet[722]: I1124 13:58:52.383120     722 scope.go:117] "RemoveContainer" containerID="cf1720169940a81c8764b2cac5355d763db5c89ffe09b53993af9b0807b75ad6"
	Nov 24 13:58:52 no-preload-495729 kubelet[722]: E1124 13:58:52.383284     722 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-r86vh_kubernetes-dashboard(4d95054b-09c8-44da-8982-1c48abf3f219)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-r86vh" podUID="4d95054b-09c8-44da-8982-1c48abf3f219"
	Nov 24 13:58:53 no-preload-495729 kubelet[722]: I1124 13:58:53.387639     722 scope.go:117] "RemoveContainer" containerID="cf1720169940a81c8764b2cac5355d763db5c89ffe09b53993af9b0807b75ad6"
	Nov 24 13:58:53 no-preload-495729 kubelet[722]: E1124 13:58:53.387816     722 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-r86vh_kubernetes-dashboard(4d95054b-09c8-44da-8982-1c48abf3f219)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-r86vh" podUID="4d95054b-09c8-44da-8982-1c48abf3f219"
	Nov 24 13:58:54 no-preload-495729 kubelet[722]: I1124 13:58:54.403486     722 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-xvfgk" podStartSLOduration=7.819098572 podStartE2EDuration="14.40346653s" podCreationTimestamp="2025-11-24 13:58:40 +0000 UTC" firstStartedPulling="2025-11-24 13:58:47.286157499 +0000 UTC m=+13.049153521" lastFinishedPulling="2025-11-24 13:58:53.870525469 +0000 UTC m=+19.633521479" observedRunningTime="2025-11-24 13:58:54.40315408 +0000 UTC m=+20.166150113" watchObservedRunningTime="2025-11-24 13:58:54.40346653 +0000 UTC m=+20.166462561"
	Nov 24 13:58:57 no-preload-495729 kubelet[722]: I1124 13:58:57.257485     722 scope.go:117] "RemoveContainer" containerID="cf1720169940a81c8764b2cac5355d763db5c89ffe09b53993af9b0807b75ad6"
	Nov 24 13:58:57 no-preload-495729 kubelet[722]: E1124 13:58:57.257647     722 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-r86vh_kubernetes-dashboard(4d95054b-09c8-44da-8982-1c48abf3f219)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-r86vh" podUID="4d95054b-09c8-44da-8982-1c48abf3f219"
	Nov 24 13:59:06 no-preload-495729 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 24 13:59:06 no-preload-495729 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 24 13:59:06 no-preload-495729 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 24 13:59:06 no-preload-495729 systemd[1]: kubelet.service: Consumed 1.134s CPU time.
	
	
	==> kubernetes-dashboard [9a185bbd7db9231364062faa7b8bf2b09a8815ef19ba81adbcd51a569f653ce2] <==
	2025/11/24 13:58:53 Using namespace: kubernetes-dashboard
	2025/11/24 13:58:53 Using in-cluster config to connect to apiserver
	2025/11/24 13:58:53 Using secret token for csrf signing
	2025/11/24 13:58:53 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/24 13:58:53 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/24 13:58:53 Successful initial request to the apiserver, version: v1.34.1
	2025/11/24 13:58:53 Generating JWE encryption key
	2025/11/24 13:58:53 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/24 13:58:53 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/24 13:58:54 Initializing JWE encryption key from synchronized object
	2025/11/24 13:58:54 Creating in-cluster Sidecar client
	2025/11/24 13:58:54 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/24 13:58:54 Serving insecurely on HTTP port: 9090
	2025/11/24 13:58:53 Starting overwatch
	
	
	==> storage-provisioner [0e39be3eda51444362a2e29b8512e0cd1c604619e642090db9c1ba4832ceac50] <==
	I1124 13:58:37.666131       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1124 13:59:07.667947       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-495729 -n no-preload-495729
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-495729 -n no-preload-495729: exit status 2 (352.13051ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-495729 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-495729
helpers_test.go:243: (dbg) docker inspect no-preload-495729:

-- stdout --
	[
	    {
	        "Id": "93c1bfb2fd2b621a0bec0b1d527f22cea7f75c06c122e690d536a96be4f3a791",
	        "Created": "2025-11-24T13:57:11.035074993Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 587288,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T13:58:28.316605422Z",
	            "FinishedAt": "2025-11-24T13:58:27.452360541Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/93c1bfb2fd2b621a0bec0b1d527f22cea7f75c06c122e690d536a96be4f3a791/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/93c1bfb2fd2b621a0bec0b1d527f22cea7f75c06c122e690d536a96be4f3a791/hostname",
	        "HostsPath": "/var/lib/docker/containers/93c1bfb2fd2b621a0bec0b1d527f22cea7f75c06c122e690d536a96be4f3a791/hosts",
	        "LogPath": "/var/lib/docker/containers/93c1bfb2fd2b621a0bec0b1d527f22cea7f75c06c122e690d536a96be4f3a791/93c1bfb2fd2b621a0bec0b1d527f22cea7f75c06c122e690d536a96be4f3a791-json.log",
	        "Name": "/no-preload-495729",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-495729:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-495729",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "93c1bfb2fd2b621a0bec0b1d527f22cea7f75c06c122e690d536a96be4f3a791",
	                "LowerDir": "/var/lib/docker/overlay2/cf0d29957ea77cc1b3192bc6ff101210d9f3df00649b7e5c1defd8454175840b-init/diff:/var/lib/docker/overlay2/b17d6205cf290186b389ac7c1255d7274fea54ef27df9ff8755bddd2d25eb638/diff",
	                "MergedDir": "/var/lib/docker/overlay2/cf0d29957ea77cc1b3192bc6ff101210d9f3df00649b7e5c1defd8454175840b/merged",
	                "UpperDir": "/var/lib/docker/overlay2/cf0d29957ea77cc1b3192bc6ff101210d9f3df00649b7e5c1defd8454175840b/diff",
	                "WorkDir": "/var/lib/docker/overlay2/cf0d29957ea77cc1b3192bc6ff101210d9f3df00649b7e5c1defd8454175840b/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-495729",
	                "Source": "/var/lib/docker/volumes/no-preload-495729/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-495729",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-495729",
	                "name.minikube.sigs.k8s.io": "no-preload-495729",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "6b37304afa7aa2a53a434e095496b7dac05da45a8d316120ce156dd372326d47",
	            "SandboxKey": "/var/run/docker/netns/6b37304afa7a",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33443"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33444"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33447"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33445"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33446"
	                    }
	                ]
	            },
	            "Networks": {
	                "no-preload-495729": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "160c86453933d759975010a4980c48a41dc82dff079fabd600f1a15b1aa5b6c8",
	                    "EndpointID": "ff301e1f9c5f1022bdaa770f8b655d1a873cf11e15cfbd32c37c72c260ae776f",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "9e:1c:b4:3f:6a:f2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-495729",
	                        "93c1bfb2fd2b"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-495729 -n no-preload-495729
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-495729 -n no-preload-495729: exit status 2 (320.756112ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-495729 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-495729 logs -n 25: (1.057091511s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬────────────────────
─┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼────────────────────
─┤
	│ ssh     │ -p cilium-165759 sudo cri-dockerd --version                                                                                                                                                                                                   │ cilium-165759          │ jenkins │ v1.37.0 │ 24 Nov 25 13:57 UTC │                     │
	│ ssh     │ -p cilium-165759 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ cilium-165759          │ jenkins │ v1.37.0 │ 24 Nov 25 13:57 UTC │                     │
	│ ssh     │ -p cilium-165759 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ cilium-165759          │ jenkins │ v1.37.0 │ 24 Nov 25 13:57 UTC │                     │
	│ ssh     │ -p cilium-165759 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ cilium-165759          │ jenkins │ v1.37.0 │ 24 Nov 25 13:57 UTC │                     │
	│ ssh     │ -p cilium-165759 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ cilium-165759          │ jenkins │ v1.37.0 │ 24 Nov 25 13:57 UTC │                     │
	│ ssh     │ -p cilium-165759 sudo containerd config dump                                                                                                                                                                                                  │ cilium-165759          │ jenkins │ v1.37.0 │ 24 Nov 25 13:57 UTC │                     │
	│ ssh     │ -p cilium-165759 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-165759          │ jenkins │ v1.37.0 │ 24 Nov 25 13:57 UTC │                     │
	│ ssh     │ -p cilium-165759 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-165759          │ jenkins │ v1.37.0 │ 24 Nov 25 13:57 UTC │                     │
	│ ssh     │ -p cilium-165759 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-165759          │ jenkins │ v1.37.0 │ 24 Nov 25 13:57 UTC │                     │
	│ ssh     │ -p cilium-165759 sudo crio config                                                                                                                                                                                                             │ cilium-165759          │ jenkins │ v1.37.0 │ 24 Nov 25 13:57 UTC │                     │
	│ delete  │ -p cilium-165759                                                                                                                                                                                                                              │ cilium-165759          │ jenkins │ v1.37.0 │ 24 Nov 25 13:57 UTC │ 24 Nov 25 13:57 UTC │
	│ start   │ -p no-preload-495729 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-495729      │ jenkins │ v1.37.0 │ 24 Nov 25 13:57 UTC │ 24 Nov 25 13:58 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-551674 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-551674 │ jenkins │ v1.37.0 │ 24 Nov 25 13:58 UTC │                     │
	│ stop    │ -p old-k8s-version-551674 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-551674 │ jenkins │ v1.37.0 │ 24 Nov 25 13:58 UTC │ 24 Nov 25 13:58 UTC │
	│ addons  │ enable metrics-server -p no-preload-495729 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-495729      │ jenkins │ v1.37.0 │ 24 Nov 25 13:58 UTC │                     │
	│ stop    │ -p no-preload-495729 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-495729      │ jenkins │ v1.37.0 │ 24 Nov 25 13:58 UTC │ 24 Nov 25 13:58 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-551674 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-551674 │ jenkins │ v1.37.0 │ 24 Nov 25 13:58 UTC │ 24 Nov 25 13:58 UTC │
	│ start   │ -p old-k8s-version-551674 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-551674 │ jenkins │ v1.37.0 │ 24 Nov 25 13:58 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-495729 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-495729      │ jenkins │ v1.37.0 │ 24 Nov 25 13:58 UTC │ 24 Nov 25 13:58 UTC │
	│ start   │ -p no-preload-495729 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-495729      │ jenkins │ v1.37.0 │ 24 Nov 25 13:58 UTC │ 24 Nov 25 13:58 UTC │
	│ start   │ -p cert-expiration-107341 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-107341 │ jenkins │ v1.37.0 │ 24 Nov 25 13:58 UTC │ 24 Nov 25 13:58 UTC │
	│ delete  │ -p cert-expiration-107341                                                                                                                                                                                                                     │ cert-expiration-107341 │ jenkins │ v1.37.0 │ 24 Nov 25 13:58 UTC │ 24 Nov 25 13:58 UTC │
	│ start   │ -p embed-certs-456660 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-456660     │ jenkins │ v1.37.0 │ 24 Nov 25 13:58 UTC │                     │
	│ image   │ no-preload-495729 image list --format=json                                                                                                                                                                                                    │ no-preload-495729      │ jenkins │ v1.37.0 │ 24 Nov 25 13:59 UTC │ 24 Nov 25 13:59 UTC │
	│ pause   │ -p no-preload-495729 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-495729      │ jenkins │ v1.37.0 │ 24 Nov 25 13:59 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴────────────────────
─┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 13:58:57
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 13:58:57.832535  592938 out.go:360] Setting OutFile to fd 1 ...
	I1124 13:58:57.832784  592938 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:58:57.832794  592938 out.go:374] Setting ErrFile to fd 2...
	I1124 13:58:57.832798  592938 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:58:57.833029  592938 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-348000/.minikube/bin
	I1124 13:58:57.833459  592938 out.go:368] Setting JSON to false
	I1124 13:58:57.834575  592938 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":9685,"bootTime":1763983053,"procs":343,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 13:58:57.834626  592938 start.go:143] virtualization: kvm guest
	I1124 13:58:57.836330  592938 out.go:179] * [embed-certs-456660] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1124 13:58:57.837507  592938 out.go:179]   - MINIKUBE_LOCATION=21932
	I1124 13:58:57.837523  592938 notify.go:221] Checking for updates...
	I1124 13:58:57.839921  592938 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 13:58:57.841058  592938 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21932-348000/kubeconfig
	I1124 13:58:57.842257  592938 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21932-348000/.minikube
	I1124 13:58:57.843359  592938 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1124 13:58:57.844387  592938 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 13:58:57.845792  592938 config.go:182] Loaded profile config "kubernetes-upgrade-061040": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 13:58:57.845882  592938 config.go:182] Loaded profile config "no-preload-495729": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 13:58:57.845974  592938 config.go:182] Loaded profile config "old-k8s-version-551674": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1124 13:58:57.846067  592938 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 13:58:57.871196  592938 docker.go:124] docker version: linux-29.0.3:Docker Engine - Community
	I1124 13:58:57.871338  592938 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 13:58:57.922978  592938 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-24 13:58:57.913490696 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 13:58:57.923135  592938 docker.go:319] overlay module found
	I1124 13:58:57.924579  592938 out.go:179] * Using the docker driver based on user configuration
	I1124 13:58:57.925832  592938 start.go:309] selected driver: docker
	I1124 13:58:57.925846  592938 start.go:927] validating driver "docker" against <nil>
	I1124 13:58:57.925861  592938 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 13:58:57.926462  592938 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 13:58:57.983452  592938 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-24 13:58:57.974403576 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 13:58:57.983659  592938 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1124 13:58:57.983967  592938 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 13:58:57.985479  592938 out.go:179] * Using Docker driver with root privileges
	I1124 13:58:57.986499  592938 cni.go:84] Creating CNI manager for ""
	I1124 13:58:57.986562  592938 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 13:58:57.986573  592938 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1124 13:58:57.986629  592938 start.go:353] cluster config:
	{Name:embed-certs-456660 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-456660 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Contain
erRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPI
D:0 GPUs: AutoPauseInterval:1m0s}
	I1124 13:58:57.987825  592938 out.go:179] * Starting "embed-certs-456660" primary control-plane node in "embed-certs-456660" cluster
	I1124 13:58:57.988771  592938 cache.go:134] Beginning downloading kic base image for docker with crio
	I1124 13:58:57.989832  592938 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1124 13:58:57.990822  592938 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 13:58:57.990848  592938 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21932-348000/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1124 13:58:57.990855  592938 cache.go:65] Caching tarball of preloaded images
	I1124 13:58:57.990928  592938 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1124 13:58:57.990965  592938 preload.go:238] Found /home/jenkins/minikube-integration/21932-348000/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1124 13:58:57.990975  592938 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1124 13:58:57.991053  592938 profile.go:143] Saving config to /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/embed-certs-456660/config.json ...
	I1124 13:58:57.991070  592938 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/embed-certs-456660/config.json: {Name:mkca432673f6444f2dbfb76f82936003fbc963b9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:58:58.009966  592938 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1124 13:58:58.009988  592938 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1124 13:58:58.010006  592938 cache.go:240] Successfully downloaded all kic artifacts
	I1124 13:58:58.010051  592938 start.go:360] acquireMachinesLock for embed-certs-456660: {Name:mkcb8e616ba1a5200ca9ad17486605327176dd6a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 13:58:58.010136  592938 start.go:364] duration metric: took 70.207µs to acquireMachinesLock for "embed-certs-456660"
	I1124 13:58:58.010158  592938 start.go:93] Provisioning new machine with config: &{Name:embed-certs-456660 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-456660 Namespace:default APIServerHAVIP: APIServe
rName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmw
arePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 13:58:58.010225  592938 start.go:125] createHost starting for "" (driver="docker")
	W1124 13:58:56.024633  584845 pod_ready.go:104] pod "coredns-5dd5756b68-swk4w" is not "Ready", error: <nil>
	W1124 13:58:58.025441  584845 pod_ready.go:104] pod "coredns-5dd5756b68-swk4w" is not "Ready", error: <nil>
	I1124 13:58:55.282243  549693 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1124 13:58:55.282319  549693 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 13:58:55.282387  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 13:58:55.313555  549693 cri.go:89] found id: "89e3f961c6df2973e09087d6d2b6be846ea3406ee999bc335e9e0f6516db0696"
	I1124 13:58:55.313575  549693 cri.go:89] found id: "dca3aad182021c9dbb6ac9a8642a506c992f98558638d4a723f3b7f6ed69d338"
	I1124 13:58:55.313579  549693 cri.go:89] found id: ""
	I1124 13:58:55.313587  549693 logs.go:282] 2 containers: [89e3f961c6df2973e09087d6d2b6be846ea3406ee999bc335e9e0f6516db0696 dca3aad182021c9dbb6ac9a8642a506c992f98558638d4a723f3b7f6ed69d338]
	I1124 13:58:55.313636  549693 ssh_runner.go:195] Run: which crictl
	I1124 13:58:55.318598  549693 ssh_runner.go:195] Run: which crictl
	I1124 13:58:55.322381  549693 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 13:58:55.322446  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 13:58:55.349491  549693 cri.go:89] found id: ""
	I1124 13:58:55.349514  549693 logs.go:282] 0 containers: []
	W1124 13:58:55.349524  549693 logs.go:284] No container was found matching "etcd"
	I1124 13:58:55.349531  549693 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 13:58:55.349585  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 13:58:55.376967  549693 cri.go:89] found id: ""
	I1124 13:58:55.376993  549693 logs.go:282] 0 containers: []
	W1124 13:58:55.377003  549693 logs.go:284] No container was found matching "coredns"
	I1124 13:58:55.377011  549693 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 13:58:55.377062  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 13:58:55.402361  549693 cri.go:89] found id: "09474ec71ef80f8f9036b351250c36774a1251b7d7957550d815fc37dbb0f4ff"
	I1124 13:58:55.402385  549693 cri.go:89] found id: ""
	I1124 13:58:55.402395  549693 logs.go:282] 1 containers: [09474ec71ef80f8f9036b351250c36774a1251b7d7957550d815fc37dbb0f4ff]
	I1124 13:58:55.402444  549693 ssh_runner.go:195] Run: which crictl
	I1124 13:58:55.406090  549693 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 13:58:55.406150  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 13:58:55.431275  549693 cri.go:89] found id: ""
	I1124 13:58:55.431298  549693 logs.go:282] 0 containers: []
	W1124 13:58:55.431308  549693 logs.go:284] No container was found matching "kube-proxy"
	I1124 13:58:55.431315  549693 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 13:58:55.431362  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 13:58:55.457228  549693 cri.go:89] found id: "df08eccce5717e621bf01ad84d40ae0cec6de290402e562d268ae5529f56350f"
	I1124 13:58:55.457253  549693 cri.go:89] found id: "cfa845965624e690fb5a0616b9068c5cb2f113ce60ef66b8febfc426ec4d7573"
	I1124 13:58:55.457259  549693 cri.go:89] found id: ""
	I1124 13:58:55.457269  549693 logs.go:282] 2 containers: [df08eccce5717e621bf01ad84d40ae0cec6de290402e562d268ae5529f56350f cfa845965624e690fb5a0616b9068c5cb2f113ce60ef66b8febfc426ec4d7573]
	I1124 13:58:55.457321  549693 ssh_runner.go:195] Run: which crictl
	I1124 13:58:55.461016  549693 ssh_runner.go:195] Run: which crictl
	I1124 13:58:55.464625  549693 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 13:58:55.464689  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 13:58:55.489953  549693 cri.go:89] found id: ""
	I1124 13:58:55.489977  549693 logs.go:282] 0 containers: []
	W1124 13:58:55.489986  549693 logs.go:284] No container was found matching "kindnet"
	I1124 13:58:55.489993  549693 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1124 13:58:55.490053  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 13:58:55.514315  549693 cri.go:89] found id: ""
	I1124 13:58:55.514337  549693 logs.go:282] 0 containers: []
	W1124 13:58:55.514347  549693 logs.go:284] No container was found matching "storage-provisioner"
	I1124 13:58:55.514366  549693 logs.go:123] Gathering logs for kube-controller-manager [df08eccce5717e621bf01ad84d40ae0cec6de290402e562d268ae5529f56350f] ...
	I1124 13:58:55.514382  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 df08eccce5717e621bf01ad84d40ae0cec6de290402e562d268ae5529f56350f"
	I1124 13:58:55.540243  549693 logs.go:123] Gathering logs for kube-controller-manager [cfa845965624e690fb5a0616b9068c5cb2f113ce60ef66b8febfc426ec4d7573] ...
	I1124 13:58:55.540268  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cfa845965624e690fb5a0616b9068c5cb2f113ce60ef66b8febfc426ec4d7573"
	I1124 13:58:55.566382  549693 logs.go:123] Gathering logs for kubelet ...
	I1124 13:58:55.566403  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 13:58:55.650749  549693 logs.go:123] Gathering logs for describe nodes ...
	I1124 13:58:55.650774  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1124 13:58:58.012302  592938 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1124 13:58:58.012480  592938 start.go:159] libmachine.API.Create for "embed-certs-456660" (driver="docker")
	I1124 13:58:58.012506  592938 client.go:173] LocalClient.Create starting
	I1124 13:58:58.012592  592938 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21932-348000/.minikube/certs/ca.pem
	I1124 13:58:58.012621  592938 main.go:143] libmachine: Decoding PEM data...
	I1124 13:58:58.012638  592938 main.go:143] libmachine: Parsing certificate...
	I1124 13:58:58.012684  592938 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21932-348000/.minikube/certs/cert.pem
	I1124 13:58:58.012705  592938 main.go:143] libmachine: Decoding PEM data...
	I1124 13:58:58.012718  592938 main.go:143] libmachine: Parsing certificate...
	I1124 13:58:58.013087  592938 cli_runner.go:164] Run: docker network inspect embed-certs-456660 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1124 13:58:58.028580  592938 cli_runner.go:211] docker network inspect embed-certs-456660 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1124 13:58:58.028637  592938 network_create.go:284] running [docker network inspect embed-certs-456660] to gather additional debugging logs...
	I1124 13:58:58.028655  592938 cli_runner.go:164] Run: docker network inspect embed-certs-456660
	W1124 13:58:58.044814  592938 cli_runner.go:211] docker network inspect embed-certs-456660 returned with exit code 1
	I1124 13:58:58.044836  592938 network_create.go:287] error running [docker network inspect embed-certs-456660]: docker network inspect embed-certs-456660: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network embed-certs-456660 not found
	I1124 13:58:58.044855  592938 network_create.go:289] output of [docker network inspect embed-certs-456660]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network embed-certs-456660 not found
	
	** /stderr **
	I1124 13:58:58.044983  592938 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 13:58:58.061306  592938 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-d51e7dfe1049 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:4a:86:1b:17:16:ff} reservation:<nil>}
	I1124 13:58:58.062125  592938 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-e3a6280986d1 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:92:e6:88:24:ba:69} reservation:<nil>}
	I1124 13:58:58.062591  592938 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-e4f79d672777 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:32:e2:7c:23:0e:27} reservation:<nil>}
	I1124 13:58:58.063158  592938 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-283ea71f66a5 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:b6:70:12:a2:88:dd} reservation:<nil>}
	I1124 13:58:58.064069  592938 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ed18b0}
	I1124 13:58:58.064102  592938 network_create.go:124] attempt to create docker network embed-certs-456660 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1124 13:58:58.064156  592938 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=embed-certs-456660 embed-certs-456660
	I1124 13:58:58.110444  592938 network_create.go:108] docker network embed-certs-456660 192.168.85.0/24 created
	I1124 13:58:58.110471  592938 kic.go:121] calculated static IP "192.168.85.2" for the "embed-certs-456660" container
	I1124 13:58:58.110535  592938 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1124 13:58:58.127093  592938 cli_runner.go:164] Run: docker volume create embed-certs-456660 --label name.minikube.sigs.k8s.io=embed-certs-456660 --label created_by.minikube.sigs.k8s.io=true
	I1124 13:58:58.143940  592938 oci.go:103] Successfully created a docker volume embed-certs-456660
	I1124 13:58:58.144007  592938 cli_runner.go:164] Run: docker run --rm --name embed-certs-456660-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-456660 --entrypoint /usr/bin/test -v embed-certs-456660:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1124 13:58:58.528917  592938 oci.go:107] Successfully prepared a docker volume embed-certs-456660
	I1124 13:58:58.528984  592938 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 13:58:58.528994  592938 kic.go:194] Starting extracting preloaded images to volume ...
	I1124 13:58:58.529053  592938 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21932-348000/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-456660:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
	W1124 13:59:00.525707  584845 pod_ready.go:104] pod "coredns-5dd5756b68-swk4w" is not "Ready", error: <nil>
	W1124 13:59:02.558153  584845 pod_ready.go:104] pod "coredns-5dd5756b68-swk4w" is not "Ready", error: <nil>
	I1124 13:59:02.862538  592938 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21932-348000/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v embed-certs-456660:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (4.333444773s)
	I1124 13:59:02.862585  592938 kic.go:203] duration metric: took 4.333571339s to extract preloaded images to volume ...
	W1124 13:59:02.862682  592938 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1124 13:59:02.862738  592938 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1124 13:59:02.862787  592938 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1124 13:59:02.917704  592938 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname embed-certs-456660 --name embed-certs-456660 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=embed-certs-456660 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=embed-certs-456660 --network embed-certs-456660 --ip 192.168.85.2 --volume embed-certs-456660:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1124 13:59:03.226966  592938 cli_runner.go:164] Run: docker container inspect embed-certs-456660 --format={{.State.Running}}
	I1124 13:59:03.243537  592938 cli_runner.go:164] Run: docker container inspect embed-certs-456660 --format={{.State.Status}}
	I1124 13:59:03.260968  592938 cli_runner.go:164] Run: docker exec embed-certs-456660 stat /var/lib/dpkg/alternatives/iptables
	I1124 13:59:03.307840  592938 oci.go:144] the created container "embed-certs-456660" has a running status.
	I1124 13:59:03.307871  592938 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21932-348000/.minikube/machines/embed-certs-456660/id_rsa...
	I1124 13:59:03.477717  592938 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21932-348000/.minikube/machines/embed-certs-456660/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1124 13:59:03.503630  592938 cli_runner.go:164] Run: docker container inspect embed-certs-456660 --format={{.State.Status}}
	I1124 13:59:03.524364  592938 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1124 13:59:03.524392  592938 kic_runner.go:114] Args: [docker exec --privileged embed-certs-456660 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1124 13:59:03.581120  592938 cli_runner.go:164] Run: docker container inspect embed-certs-456660 --format={{.State.Status}}
	I1124 13:59:03.601366  592938 machine.go:94] provisionDockerMachine start ...
	I1124 13:59:03.601448  592938 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-456660
	I1124 13:59:03.619901  592938 main.go:143] libmachine: Using SSH client type: native
	I1124 13:59:03.620336  592938 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33448 <nil> <nil>}
	I1124 13:59:03.620361  592938 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 13:59:03.764471  592938 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-456660
	
	I1124 13:59:03.764503  592938 ubuntu.go:182] provisioning hostname "embed-certs-456660"
	I1124 13:59:03.764577  592938 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-456660
	I1124 13:59:03.783422  592938 main.go:143] libmachine: Using SSH client type: native
	I1124 13:59:03.783695  592938 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33448 <nil> <nil>}
	I1124 13:59:03.783714  592938 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-456660 && echo "embed-certs-456660" | sudo tee /etc/hostname
	I1124 13:59:03.935840  592938 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-456660
	
	I1124 13:59:03.935949  592938 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-456660
	I1124 13:59:03.952825  592938 main.go:143] libmachine: Using SSH client type: native
	I1124 13:59:03.953098  592938 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33448 <nil> <nil>}
	I1124 13:59:03.953118  592938 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-456660' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-456660/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-456660' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 13:59:04.092930  592938 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 13:59:04.092975  592938 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21932-348000/.minikube CaCertPath:/home/jenkins/minikube-integration/21932-348000/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21932-348000/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21932-348000/.minikube}
	I1124 13:59:04.093013  592938 ubuntu.go:190] setting up certificates
	I1124 13:59:04.093033  592938 provision.go:84] configureAuth start
	I1124 13:59:04.093081  592938 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-456660
	I1124 13:59:04.109184  592938 provision.go:143] copyHostCerts
	I1124 13:59:04.109236  592938 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-348000/.minikube/key.pem, removing ...
	I1124 13:59:04.109245  592938 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-348000/.minikube/key.pem
	I1124 13:59:04.109308  592938 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-348000/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21932-348000/.minikube/key.pem (1675 bytes)
	I1124 13:59:04.109387  592938 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-348000/.minikube/ca.pem, removing ...
	I1124 13:59:04.109395  592938 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-348000/.minikube/ca.pem
	I1124 13:59:04.109420  592938 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-348000/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21932-348000/.minikube/ca.pem (1078 bytes)
	I1124 13:59:04.109474  592938 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-348000/.minikube/cert.pem, removing ...
	I1124 13:59:04.109482  592938 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-348000/.minikube/cert.pem
	I1124 13:59:04.109503  592938 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-348000/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21932-348000/.minikube/cert.pem (1123 bytes)
	I1124 13:59:04.109553  592938 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21932-348000/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21932-348000/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21932-348000/.minikube/certs/ca-key.pem org=jenkins.embed-certs-456660 san=[127.0.0.1 192.168.85.2 embed-certs-456660 localhost minikube]
	I1124 13:59:04.180674  592938 provision.go:177] copyRemoteCerts
	I1124 13:59:04.180723  592938 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 13:59:04.180766  592938 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-456660
	I1124 13:59:04.197050  592938 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/embed-certs-456660/id_rsa Username:docker}
	I1124 13:59:04.296652  592938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1124 13:59:04.315921  592938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I1124 13:59:04.333432  592938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1124 13:59:04.350492  592938 provision.go:87] duration metric: took 257.44313ms to configureAuth
	I1124 13:59:04.350515  592938 ubuntu.go:206] setting minikube options for container-runtime
	I1124 13:59:04.350689  592938 config.go:182] Loaded profile config "embed-certs-456660": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 13:59:04.350815  592938 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-456660
	I1124 13:59:04.367314  592938 main.go:143] libmachine: Using SSH client type: native
	I1124 13:59:04.367554  592938 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33448 <nil> <nil>}
	I1124 13:59:04.367591  592938 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1124 13:59:04.647029  592938 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1124 13:59:04.647057  592938 machine.go:97] duration metric: took 1.045667903s to provisionDockerMachine
	I1124 13:59:04.647067  592938 client.go:176] duration metric: took 6.634553448s to LocalClient.Create
	I1124 13:59:04.647087  592938 start.go:167] duration metric: took 6.634607347s to libmachine.API.Create "embed-certs-456660"
	I1124 13:59:04.647097  592938 start.go:293] postStartSetup for "embed-certs-456660" (driver="docker")
	I1124 13:59:04.647110  592938 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 13:59:04.647183  592938 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 13:59:04.647235  592938 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-456660
	I1124 13:59:04.664815  592938 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/embed-certs-456660/id_rsa Username:docker}
	I1124 13:59:04.765922  592938 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 13:59:04.769390  592938 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 13:59:04.769423  592938 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 13:59:04.769436  592938 filesync.go:126] Scanning /home/jenkins/minikube-integration/21932-348000/.minikube/addons for local assets ...
	I1124 13:59:04.769480  592938 filesync.go:126] Scanning /home/jenkins/minikube-integration/21932-348000/.minikube/files for local assets ...
	I1124 13:59:04.769548  592938 filesync.go:149] local asset: /home/jenkins/minikube-integration/21932-348000/.minikube/files/etc/ssl/certs/3515932.pem -> 3515932.pem in /etc/ssl/certs
	I1124 13:59:04.769627  592938 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 13:59:04.776940  592938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/files/etc/ssl/certs/3515932.pem --> /etc/ssl/certs/3515932.pem (1708 bytes)
	I1124 13:59:04.795617  592938 start.go:296] duration metric: took 148.508431ms for postStartSetup
	I1124 13:59:04.795978  592938 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-456660
	I1124 13:59:04.812064  592938 profile.go:143] Saving config to /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/embed-certs-456660/config.json ...
	I1124 13:59:04.812274  592938 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 13:59:04.812322  592938 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-456660
	I1124 13:59:04.827826  592938 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/embed-certs-456660/id_rsa Username:docker}
	I1124 13:59:04.924622  592938 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 13:59:04.930001  592938 start.go:128] duration metric: took 6.919762039s to createHost
	I1124 13:59:04.930025  592938 start.go:83] releasing machines lock for "embed-certs-456660", held for 6.91987827s
	I1124 13:59:04.930094  592938 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-456660
	I1124 13:59:04.946964  592938 ssh_runner.go:195] Run: cat /version.json
	I1124 13:59:04.947005  592938 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 13:59:04.947025  592938 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-456660
	I1124 13:59:04.947063  592938 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-456660
	I1124 13:59:04.963467  592938 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/embed-certs-456660/id_rsa Username:docker}
	I1124 13:59:04.965071  592938 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/embed-certs-456660/id_rsa Username:docker}
	I1124 13:59:05.059369  592938 ssh_runner.go:195] Run: systemctl --version
	I1124 13:59:05.130113  592938 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1124 13:59:05.163625  592938 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 13:59:05.168082  592938 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 13:59:05.168151  592938 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 13:59:05.192247  592938 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1124 13:59:05.192266  592938 start.go:496] detecting cgroup driver to use...
	I1124 13:59:05.192297  592938 detect.go:190] detected "systemd" cgroup driver on host os
	I1124 13:59:05.192337  592938 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1124 13:59:05.207714  592938 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1124 13:59:05.219166  592938 docker.go:218] disabling cri-docker service (if available) ...
	I1124 13:59:05.219220  592938 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 13:59:05.234695  592938 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 13:59:05.250696  592938 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 13:59:05.334293  592938 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 13:59:05.420473  592938 docker.go:234] disabling docker service ...
	I1124 13:59:05.420529  592938 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 13:59:05.438863  592938 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 13:59:05.450467  592938 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 13:59:05.531957  592938 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 13:59:05.612589  592938 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 13:59:05.625823  592938 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 13:59:05.642160  592938 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1124 13:59:05.642220  592938 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 13:59:05.652601  592938 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1124 13:59:05.652667  592938 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 13:59:05.661061  592938 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 13:59:05.669339  592938 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 13:59:05.678464  592938 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 13:59:05.686359  592938 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 13:59:05.695388  592938 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 13:59:05.709017  592938 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 13:59:05.718414  592938 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 13:59:05.726796  592938 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 13:59:05.735193  592938 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 13:59:05.821470  592938 ssh_runner.go:195] Run: sudo systemctl restart crio
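
The sequence above is how minikube adapts CRI-O before bootstrapping: crictl is pointed at the CRI-O socket, the pause image is pinned, the cgroup manager is switched to systemd (with conmon in the pod cgroup), net.ipv4.ip_unprivileged_port_start is relaxed, and the runtime is restarted. A minimal standalone sketch of the crictl and cgroup-manager steps, reusing the /etc/crio/crio.conf.d/02-crio.conf drop-in path and pause image from the log (intended for a disposable node, not a hardened install):

	# point crictl at the CRI-O socket
	printf 'runtime-endpoint: unix:///var/run/crio/crio.sock\n' | sudo tee /etc/crictl.yaml
	# pin the pause image and switch to the systemd cgroup manager, conmon in the pod cgroup
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
	# apply the new runtime config
	sudo systemctl daemon-reload && sudo systemctl restart crio
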
	I1124 13:59:05.969579  592938 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1124 13:59:05.969641  592938 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1124 13:59:05.974045  592938 start.go:564] Will wait 60s for crictl version
	I1124 13:59:05.974102  592938 ssh_runner.go:195] Run: which crictl
	I1124 13:59:05.977856  592938 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 13:59:06.007830  592938 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1124 13:59:06.007938  592938 ssh_runner.go:195] Run: crio --version
	I1124 13:59:06.041964  592938 ssh_runner.go:195] Run: crio --version
	I1124 13:59:06.073780  592938 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1124 13:59:06.074910  592938 cli_runner.go:164] Run: docker network inspect embed-certs-456660 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 13:59:06.091963  592938 ssh_runner.go:195] Run: grep 192.168.85.1	host.minikube.internal$ /etc/hosts
	I1124 13:59:06.096146  592938 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.85.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
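
The /etc/hosts edit above follows a small, reusable shell pattern: strip any existing line for the name, append the desired entry, write the result to a temp file, then copy it back over /etc/hosts with sudo so the swap is a single privileged write. A generic sketch of the same pattern; the host name and address below are placeholders, not values from this run:

	name=myhost          # placeholder hostname
	ip=192.0.2.10        # placeholder address (TEST-NET-3)
	{ grep -v "[[:space:]]${name}\$" /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > /tmp/hosts.$$
	sudo cp /tmp/hosts.$$ /etc/hosts && rm -f /tmp/hosts.$$
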
	I1124 13:59:06.107259  592938 kubeadm.go:884] updating cluster {Name:embed-certs-456660 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-456660 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 13:59:06.107412  592938 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 13:59:06.107470  592938 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 13:59:06.139530  592938 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 13:59:06.139547  592938 crio.go:433] Images already preloaded, skipping extraction
	I1124 13:59:06.139587  592938 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 13:59:06.164153  592938 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 13:59:06.164173  592938 cache_images.go:86] Images are preloaded, skipping loading
	I1124 13:59:06.164183  592938 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.34.1 crio true true} ...
	I1124 13:59:06.164285  592938 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=embed-certs-456660 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:embed-certs-456660 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1124 13:59:06.164359  592938 ssh_runner.go:195] Run: crio config
	I1124 13:59:06.217902  592938 cni.go:84] Creating CNI manager for ""
	I1124 13:59:06.217931  592938 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 13:59:06.217954  592938 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 13:59:06.217985  592938 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-456660 NodeName:embed-certs-456660 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 13:59:06.218176  592938 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-456660"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.85.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1124 13:59:06.218254  592938 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1124 13:59:06.227084  592938 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 13:59:06.227141  592938 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 13:59:06.234616  592938 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1124 13:59:06.246608  592938 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 13:59:06.260721  592938 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1124 13:59:06.273156  592938 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1124 13:59:06.276572  592938 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 13:59:06.286079  592938 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 13:59:06.370120  592938 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 13:59:06.398303  592938 certs.go:69] Setting up /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/embed-certs-456660 for IP: 192.168.85.2
	I1124 13:59:06.398326  592938 certs.go:195] generating shared ca certs ...
	I1124 13:59:06.398348  592938 certs.go:227] acquiring lock for ca certs: {Name:mk929c5478505d0d4647158f3ccc02830de7b582 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:59:06.398517  592938 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21932-348000/.minikube/ca.key
	I1124 13:59:06.398579  592938 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21932-348000/.minikube/proxy-client-ca.key
	I1124 13:59:06.398594  592938 certs.go:257] generating profile certs ...
	I1124 13:59:06.398666  592938 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/embed-certs-456660/client.key
	I1124 13:59:06.398684  592938 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/embed-certs-456660/client.crt with IP's: []
	I1124 13:59:06.498548  592938 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/embed-certs-456660/client.crt ...
	I1124 13:59:06.498572  592938 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/embed-certs-456660/client.crt: {Name:mk5dbdd1018942be64d0fe41c2041c0eb09648bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:59:06.498730  592938 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/embed-certs-456660/client.key ...
	I1124 13:59:06.498742  592938 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/embed-certs-456660/client.key: {Name:mk26189c78849c49110708e44f9d67abb6861356 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:59:06.498824  592938 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/embed-certs-456660/apiserver.key.52895963
	I1124 13:59:06.498839  592938 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/embed-certs-456660/apiserver.crt.52895963 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
	I1124 13:59:06.702165  592938 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/embed-certs-456660/apiserver.crt.52895963 ...
	I1124 13:59:06.702188  592938 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/embed-certs-456660/apiserver.crt.52895963: {Name:mk338aa06a2033dabf1e3f21ca21dd234c5e2dd3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:59:06.702334  592938 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/embed-certs-456660/apiserver.key.52895963 ...
	I1124 13:59:06.702346  592938 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/embed-certs-456660/apiserver.key.52895963: {Name:mk16a3b8cf229e5b8ae17a1b148cd2da7f8d36dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:59:06.702420  592938 certs.go:382] copying /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/embed-certs-456660/apiserver.crt.52895963 -> /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/embed-certs-456660/apiserver.crt
	I1124 13:59:06.702511  592938 certs.go:386] copying /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/embed-certs-456660/apiserver.key.52895963 -> /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/embed-certs-456660/apiserver.key
	I1124 13:59:06.702573  592938 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/embed-certs-456660/proxy-client.key
	I1124 13:59:06.702588  592938 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/embed-certs-456660/proxy-client.crt with IP's: []
	I1124 13:59:06.788430  592938 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/embed-certs-456660/proxy-client.crt ...
	I1124 13:59:06.788451  592938 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/embed-certs-456660/proxy-client.crt: {Name:mkc236e2b30f7bc3ca90b5f1d5921c0b1bc83492 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:59:06.788585  592938 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/embed-certs-456660/proxy-client.key ...
	I1124 13:59:06.788597  592938 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/embed-certs-456660/proxy-client.key: {Name:mkba03e70aa406aca0eb283f1580bfcdbc075759 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
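
The profile certs above (client, apiserver, proxy-client) are generated in minikube's own Go code; the apiserver certificate is the notable one because its SANs have to cover the in-cluster service IP (10.96.0.1), localhost, and the node IP (192.168.85.2). Purely as an illustration of the same idea with stock openssl, not what minikube actually runs, a signed cert with those IP SANs could be minted against an existing CA roughly like this (file names are placeholders):

	openssl genrsa -out apiserver.key 2048
	openssl req -new -key apiserver.key -subj "/CN=minikube" -out apiserver.csr
	openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 365 \
	  -out apiserver.crt \
	  -extfile <(printf 'subjectAltName=IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1,IP:192.168.85.2\n')
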
	I1124 13:59:06.788763  592938 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-348000/.minikube/certs/351593.pem (1338 bytes)
	W1124 13:59:06.788800  592938 certs.go:480] ignoring /home/jenkins/minikube-integration/21932-348000/.minikube/certs/351593_empty.pem, impossibly tiny 0 bytes
	I1124 13:59:06.788811  592938 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-348000/.minikube/certs/ca-key.pem (1675 bytes)
	I1124 13:59:06.788833  592938 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-348000/.minikube/certs/ca.pem (1078 bytes)
	I1124 13:59:06.788858  592938 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-348000/.minikube/certs/cert.pem (1123 bytes)
	I1124 13:59:06.788880  592938 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-348000/.minikube/certs/key.pem (1675 bytes)
	I1124 13:59:06.788946  592938 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-348000/.minikube/files/etc/ssl/certs/3515932.pem (1708 bytes)
	I1124 13:59:06.789502  592938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 13:59:06.807160  592938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 13:59:06.829000  592938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 13:59:06.845563  592938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1124 13:59:06.862112  592938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/embed-certs-456660/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I1124 13:59:06.878601  592938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/embed-certs-456660/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1124 13:59:06.895283  592938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/embed-certs-456660/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 13:59:06.911984  592938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/embed-certs-456660/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1124 13:59:06.928783  592938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 13:59:06.948042  592938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/certs/351593.pem --> /usr/share/ca-certificates/351593.pem (1338 bytes)
	I1124 13:59:06.966342  592938 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/files/etc/ssl/certs/3515932.pem --> /usr/share/ca-certificates/3515932.pem (1708 bytes)
	I1124 13:59:06.985091  592938 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 13:59:06.997622  592938 ssh_runner.go:195] Run: openssl version
	I1124 13:59:07.003945  592938 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/351593.pem && ln -fs /usr/share/ca-certificates/351593.pem /etc/ssl/certs/351593.pem"
	I1124 13:59:07.011498  592938 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/351593.pem
	I1124 13:59:07.015014  592938 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 13:19 /usr/share/ca-certificates/351593.pem
	I1124 13:59:07.015062  592938 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/351593.pem
	I1124 13:59:07.049708  592938 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/351593.pem /etc/ssl/certs/51391683.0"
	I1124 13:59:07.057624  592938 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3515932.pem && ln -fs /usr/share/ca-certificates/3515932.pem /etc/ssl/certs/3515932.pem"
	I1124 13:59:07.065520  592938 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3515932.pem
	I1124 13:59:07.068918  592938 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 13:19 /usr/share/ca-certificates/3515932.pem
	I1124 13:59:07.068976  592938 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3515932.pem
	I1124 13:59:07.106713  592938 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3515932.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 13:59:07.115433  592938 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 13:59:07.123951  592938 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 13:59:07.128215  592938 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 13:14 /usr/share/ca-certificates/minikubeCA.pem
	I1124 13:59:07.128270  592938 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 13:59:07.163452  592938 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
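
The openssl x509 -hash / ln -fs pairs above maintain OpenSSL's hashed-name lookup scheme: each trusted PEM referenced from /etc/ssl/certs gets a symlink named after its subject hash (suffix .0, incremented on collisions), which is the filename OpenSSL resolves when verifying a chain against that directory. The same link can be created, or all of them rebuilt, by hand:

	cert=/usr/share/ca-certificates/minikubeCA.pem        # one of the certs copied in above
	hash=$(openssl x509 -hash -noout -in "$cert")
	sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"
	# or regenerate every hash link in the directory at once (OpenSSL 1.1+; older images ship c_rehash)
	sudo openssl rehash /etc/ssl/certs
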
	I1124 13:59:07.171437  592938 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 13:59:07.174773  592938 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1124 13:59:07.174822  592938 kubeadm.go:401] StartCluster: {Name:embed-certs-456660 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:embed-certs-456660 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 13:59:07.174902  592938 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 13:59:07.174943  592938 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 13:59:07.201086  592938 cri.go:89] found id: ""
	I1124 13:59:07.201141  592938 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 13:59:07.210584  592938 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1124 13:59:07.219785  592938 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1124 13:59:07.219836  592938 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 13:59:07.227418  592938 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1124 13:59:07.227435  592938 kubeadm.go:158] found existing configuration files:
	
	I1124 13:59:07.227481  592938 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1124 13:59:07.236143  592938 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1124 13:59:07.236191  592938 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1124 13:59:07.244016  592938 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1124 13:59:07.251806  592938 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1124 13:59:07.251853  592938 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 13:59:07.259475  592938 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1124 13:59:07.266732  592938 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1124 13:59:07.266771  592938 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 13:59:07.273760  592938 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1124 13:59:07.281166  592938 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1124 13:59:07.281211  592938 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1124 13:59:07.288071  592938 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1124 13:59:07.326234  592938 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1124 13:59:07.326327  592938 kubeadm.go:319] [preflight] Running pre-flight checks
	I1124 13:59:07.348023  592938 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1124 13:59:07.348115  592938 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1124 13:59:07.348165  592938 kubeadm.go:319] OS: Linux
	I1124 13:59:07.348229  592938 kubeadm.go:319] CGROUPS_CPU: enabled
	I1124 13:59:07.348295  592938 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1124 13:59:07.348364  592938 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1124 13:59:07.348432  592938 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1124 13:59:07.348500  592938 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1124 13:59:07.348565  592938 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1124 13:59:07.348631  592938 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1124 13:59:07.348691  592938 kubeadm.go:319] CGROUPS_IO: enabled
	I1124 13:59:07.406542  592938 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1124 13:59:07.406686  592938 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1124 13:59:07.406833  592938 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1124 13:59:07.414244  592938 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1124 13:59:07.415800  592938 out.go:252]   - Generating certificates and keys ...
	I1124 13:59:07.415883  592938 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1124 13:59:07.415992  592938 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1124 13:59:07.501311  592938 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1124 13:59:07.714621  592938 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	W1124 13:59:05.024796  584845 pod_ready.go:104] pod "coredns-5dd5756b68-swk4w" is not "Ready", error: <nil>
	W1124 13:59:07.025331  584845 pod_ready.go:104] pod "coredns-5dd5756b68-swk4w" is not "Ready", error: <nil>
	I1124 13:59:09.526065  584845 pod_ready.go:94] pod "coredns-5dd5756b68-swk4w" is "Ready"
	I1124 13:59:09.526091  584845 pod_ready.go:86] duration metric: took 38.506720121s for pod "coredns-5dd5756b68-swk4w" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:59:09.528983  584845 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-551674" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:59:09.537202  584845 pod_ready.go:94] pod "etcd-old-k8s-version-551674" is "Ready"
	I1124 13:59:09.537224  584845 pod_ready.go:86] duration metric: took 8.21558ms for pod "etcd-old-k8s-version-551674" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:59:09.540350  584845 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-551674" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:59:09.544504  584845 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-551674" is "Ready"
	I1124 13:59:09.544530  584845 pod_ready.go:86] duration metric: took 4.150081ms for pod "kube-apiserver-old-k8s-version-551674" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:59:09.547466  584845 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-551674" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 13:59:05.705335  549693 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": (10.054541563s)
	W1124 13:59:05.705381  549693 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	Unable to connect to the server: net/http: TLS handshake timeout
	 output: 
	** stderr ** 
	Unable to connect to the server: net/http: TLS handshake timeout
	
	** /stderr **
	I1124 13:59:05.705406  549693 logs.go:123] Gathering logs for kube-apiserver [89e3f961c6df2973e09087d6d2b6be846ea3406ee999bc335e9e0f6516db0696] ...
	I1124 13:59:05.705422  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 89e3f961c6df2973e09087d6d2b6be846ea3406ee999bc335e9e0f6516db0696"
	I1124 13:59:05.741745  549693 logs.go:123] Gathering logs for kube-apiserver [dca3aad182021c9dbb6ac9a8642a506c992f98558638d4a723f3b7f6ed69d338] ...
	I1124 13:59:05.741778  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dca3aad182021c9dbb6ac9a8642a506c992f98558638d4a723f3b7f6ed69d338"
	I1124 13:59:05.780861  549693 logs.go:123] Gathering logs for CRI-O ...
	I1124 13:59:05.780899  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 13:59:05.837458  549693 logs.go:123] Gathering logs for container status ...
	I1124 13:59:05.837485  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 13:59:05.869991  549693 logs.go:123] Gathering logs for dmesg ...
	I1124 13:59:05.870025  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 13:59:05.890658  549693 logs.go:123] Gathering logs for kube-scheduler [09474ec71ef80f8f9036b351250c36774a1251b7d7957550d815fc37dbb0f4ff] ...
	I1124 13:59:05.890691  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 09474ec71ef80f8f9036b351250c36774a1251b7d7957550d815fc37dbb0f4ff"
	I1124 13:59:08.449589  549693 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 13:59:09.421379  549693 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": read tcp 192.168.76.1:52496->192.168.76.2:8443: read: connection reset by peer
	I1124 13:59:09.421447  549693 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 13:59:09.421510  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 13:59:09.453104  549693 cri.go:89] found id: "89e3f961c6df2973e09087d6d2b6be846ea3406ee999bc335e9e0f6516db0696"
	I1124 13:59:09.453120  549693 cri.go:89] found id: "dca3aad182021c9dbb6ac9a8642a506c992f98558638d4a723f3b7f6ed69d338"
	I1124 13:59:09.453124  549693 cri.go:89] found id: ""
	I1124 13:59:09.453132  549693 logs.go:282] 2 containers: [89e3f961c6df2973e09087d6d2b6be846ea3406ee999bc335e9e0f6516db0696 dca3aad182021c9dbb6ac9a8642a506c992f98558638d4a723f3b7f6ed69d338]
	I1124 13:59:09.453179  549693 ssh_runner.go:195] Run: which crictl
	I1124 13:59:09.457606  549693 ssh_runner.go:195] Run: which crictl
	I1124 13:59:09.461391  549693 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 13:59:09.461449  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 13:59:09.493173  549693 cri.go:89] found id: ""
	I1124 13:59:09.493201  549693 logs.go:282] 0 containers: []
	W1124 13:59:09.493211  549693 logs.go:284] No container was found matching "etcd"
	I1124 13:59:09.493219  549693 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 13:59:09.493282  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 13:59:09.525481  549693 cri.go:89] found id: ""
	I1124 13:59:09.525505  549693 logs.go:282] 0 containers: []
	W1124 13:59:09.525515  549693 logs.go:284] No container was found matching "coredns"
	I1124 13:59:09.525523  549693 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 13:59:09.525582  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 13:59:09.560223  549693 cri.go:89] found id: "09474ec71ef80f8f9036b351250c36774a1251b7d7957550d815fc37dbb0f4ff"
	I1124 13:59:09.560243  549693 cri.go:89] found id: ""
	I1124 13:59:09.560254  549693 logs.go:282] 1 containers: [09474ec71ef80f8f9036b351250c36774a1251b7d7957550d815fc37dbb0f4ff]
	I1124 13:59:09.560309  549693 ssh_runner.go:195] Run: which crictl
	I1124 13:59:09.564532  549693 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 13:59:09.564597  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 13:59:09.594875  549693 cri.go:89] found id: ""
	I1124 13:59:09.594916  549693 logs.go:282] 0 containers: []
	W1124 13:59:09.594933  549693 logs.go:284] No container was found matching "kube-proxy"
	I1124 13:59:09.594941  549693 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 13:59:09.594991  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 13:59:09.623640  549693 cri.go:89] found id: "df08eccce5717e621bf01ad84d40ae0cec6de290402e562d268ae5529f56350f"
	I1124 13:59:09.623659  549693 cri.go:89] found id: "cfa845965624e690fb5a0616b9068c5cb2f113ce60ef66b8febfc426ec4d7573"
	I1124 13:59:09.623665  549693 cri.go:89] found id: ""
	I1124 13:59:09.623675  549693 logs.go:282] 2 containers: [df08eccce5717e621bf01ad84d40ae0cec6de290402e562d268ae5529f56350f cfa845965624e690fb5a0616b9068c5cb2f113ce60ef66b8febfc426ec4d7573]
	I1124 13:59:09.623734  549693 ssh_runner.go:195] Run: which crictl
	I1124 13:59:09.627585  549693 ssh_runner.go:195] Run: which crictl
	I1124 13:59:09.631472  549693 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 13:59:09.631524  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 13:59:09.658753  549693 cri.go:89] found id: ""
	I1124 13:59:09.658778  549693 logs.go:282] 0 containers: []
	W1124 13:59:09.658788  549693 logs.go:284] No container was found matching "kindnet"
	I1124 13:59:09.658796  549693 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1124 13:59:09.658839  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 13:59:09.689345  549693 cri.go:89] found id: ""
	I1124 13:59:09.689369  549693 logs.go:282] 0 containers: []
	W1124 13:59:09.689378  549693 logs.go:284] No container was found matching "storage-provisioner"
	I1124 13:59:09.689401  549693 logs.go:123] Gathering logs for container status ...
	I1124 13:59:09.689416  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 13:59:09.723790  549693 logs.go:123] Gathering logs for kubelet ...
	I1124 13:59:09.723817  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 13:59:09.843550  549693 logs.go:123] Gathering logs for kube-apiserver [89e3f961c6df2973e09087d6d2b6be846ea3406ee999bc335e9e0f6516db0696] ...
	I1124 13:59:09.843592  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 89e3f961c6df2973e09087d6d2b6be846ea3406ee999bc335e9e0f6516db0696"
	I1124 13:59:09.881468  549693 logs.go:123] Gathering logs for kube-apiserver [dca3aad182021c9dbb6ac9a8642a506c992f98558638d4a723f3b7f6ed69d338] ...
	I1124 13:59:09.881494  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dca3aad182021c9dbb6ac9a8642a506c992f98558638d4a723f3b7f6ed69d338"
	W1124 13:59:09.912613  549693 logs.go:130] failed kube-apiserver [dca3aad182021c9dbb6ac9a8642a506c992f98558638d4a723f3b7f6ed69d338]: command: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dca3aad182021c9dbb6ac9a8642a506c992f98558638d4a723f3b7f6ed69d338" /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 dca3aad182021c9dbb6ac9a8642a506c992f98558638d4a723f3b7f6ed69d338": Process exited with status 1
	stdout:
	
	stderr:
	E1124 13:59:09.909763    6289 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dca3aad182021c9dbb6ac9a8642a506c992f98558638d4a723f3b7f6ed69d338\": container with ID starting with dca3aad182021c9dbb6ac9a8642a506c992f98558638d4a723f3b7f6ed69d338 not found: ID does not exist" containerID="dca3aad182021c9dbb6ac9a8642a506c992f98558638d4a723f3b7f6ed69d338"
	time="2025-11-24T13:59:09Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"dca3aad182021c9dbb6ac9a8642a506c992f98558638d4a723f3b7f6ed69d338\": container with ID starting with dca3aad182021c9dbb6ac9a8642a506c992f98558638d4a723f3b7f6ed69d338 not found: ID does not exist"
	 output: 
	** stderr ** 
	E1124 13:59:09.909763    6289 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dca3aad182021c9dbb6ac9a8642a506c992f98558638d4a723f3b7f6ed69d338\": container with ID starting with dca3aad182021c9dbb6ac9a8642a506c992f98558638d4a723f3b7f6ed69d338 not found: ID does not exist" containerID="dca3aad182021c9dbb6ac9a8642a506c992f98558638d4a723f3b7f6ed69d338"
	time="2025-11-24T13:59:09Z" level=fatal msg="rpc error: code = NotFound desc = could not find container \"dca3aad182021c9dbb6ac9a8642a506c992f98558638d4a723f3b7f6ed69d338\": container with ID starting with dca3aad182021c9dbb6ac9a8642a506c992f98558638d4a723f3b7f6ed69d338 not found: ID does not exist"
	
	** /stderr **
	I1124 13:59:09.912634  549693 logs.go:123] Gathering logs for kube-scheduler [09474ec71ef80f8f9036b351250c36774a1251b7d7957550d815fc37dbb0f4ff] ...
	I1124 13:59:09.912648  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 09474ec71ef80f8f9036b351250c36774a1251b7d7957550d815fc37dbb0f4ff"
	I1124 13:59:09.969691  549693 logs.go:123] Gathering logs for dmesg ...
	I1124 13:59:09.969717  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 13:59:09.986469  549693 logs.go:123] Gathering logs for describe nodes ...
	I1124 13:59:09.986494  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 13:59:10.047026  549693 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 13:59:10.047050  549693 logs.go:123] Gathering logs for kube-controller-manager [df08eccce5717e621bf01ad84d40ae0cec6de290402e562d268ae5529f56350f] ...
	I1124 13:59:10.047066  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 df08eccce5717e621bf01ad84d40ae0cec6de290402e562d268ae5529f56350f"
	I1124 13:59:10.075455  549693 logs.go:123] Gathering logs for kube-controller-manager [cfa845965624e690fb5a0616b9068c5cb2f113ce60ef66b8febfc426ec4d7573] ...
	I1124 13:59:10.075483  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 cfa845965624e690fb5a0616b9068c5cb2f113ce60ef66b8febfc426ec4d7573"
	I1124 13:59:10.109849  549693 logs.go:123] Gathering logs for CRI-O ...
	I1124 13:59:10.109885  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 13:59:07.894908  592938 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1124 13:59:07.996877  592938 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1124 13:59:08.135177  592938 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1124 13:59:08.135363  592938 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [embed-certs-456660 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1124 13:59:08.328646  592938 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1124 13:59:08.328847  592938 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [embed-certs-456660 localhost] and IPs [192.168.85.2 127.0.0.1 ::1]
	I1124 13:59:08.548984  592938 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1124 13:59:08.733641  592938 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1124 13:59:09.174550  592938 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1124 13:59:09.174668  592938 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1124 13:59:09.541383  592938 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1124 13:59:09.841530  592938 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1124 13:59:10.085654  592938 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1124 13:59:10.430117  592938 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1124 13:59:10.554647  592938 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1124 13:59:10.555337  592938 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1124 13:59:10.558781  592938 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	
	
	==> CRI-O <==
	Nov 24 13:58:50 no-preload-495729 crio[572]: time="2025-11-24T13:58:50.548544361Z" level=info msg="Created container 0a7adf2b4746e1fb9e8afcfaba5e975500e5db2d158856c135eff6f024dd7995: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-r86vh/dashboard-metrics-scraper" id=e4a5568b-a4eb-4dbc-adbe-cf0b94e548a9 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 13:58:50 no-preload-495729 crio[572]: time="2025-11-24T13:58:50.549152149Z" level=info msg="Starting container: 0a7adf2b4746e1fb9e8afcfaba5e975500e5db2d158856c135eff6f024dd7995" id=189d45d5-36ed-4a98-bc5d-b063e41eb230 name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 13:58:50 no-preload-495729 crio[572]: time="2025-11-24T13:58:50.551339759Z" level=info msg="Started container" PID=1665 containerID=0a7adf2b4746e1fb9e8afcfaba5e975500e5db2d158856c135eff6f024dd7995 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-r86vh/dashboard-metrics-scraper id=189d45d5-36ed-4a98-bc5d-b063e41eb230 name=/runtime.v1.RuntimeService/StartContainer sandboxID=bd48bd6794798f0d880f6d7affc69ed03e9f5aa7a993b803f374e7dee55da602
	Nov 24 13:58:51 no-preload-495729 crio[572]: time="2025-11-24T13:58:51.377958963Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=2fb0f4df-023e-41c5-9750-31b3be2691e3 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 13:58:51 no-preload-495729 crio[572]: time="2025-11-24T13:58:51.395815191Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=7e8f5e9d-7641-4c18-9c31-1e0bf134680e name=/runtime.v1.ImageService/ImageStatus
	Nov 24 13:58:51 no-preload-495729 crio[572]: time="2025-11-24T13:58:51.406286047Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-r86vh/dashboard-metrics-scraper" id=bcdc6559-290b-490e-9b1a-35b4fba6d741 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 13:58:51 no-preload-495729 crio[572]: time="2025-11-24T13:58:51.406537747Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 13:58:51 no-preload-495729 crio[572]: time="2025-11-24T13:58:51.496596117Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 13:58:51 no-preload-495729 crio[572]: time="2025-11-24T13:58:51.497392329Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 13:58:51 no-preload-495729 crio[572]: time="2025-11-24T13:58:51.579959775Z" level=info msg="Created container cf1720169940a81c8764b2cac5355d763db5c89ffe09b53993af9b0807b75ad6: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-r86vh/dashboard-metrics-scraper" id=bcdc6559-290b-490e-9b1a-35b4fba6d741 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 13:58:51 no-preload-495729 crio[572]: time="2025-11-24T13:58:51.580910215Z" level=info msg="Starting container: cf1720169940a81c8764b2cac5355d763db5c89ffe09b53993af9b0807b75ad6" id=7aaeea88-dc32-4678-b3b3-2e38adc5dd2b name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 13:58:51 no-preload-495729 crio[572]: time="2025-11-24T13:58:51.583614633Z" level=info msg="Started container" PID=1682 containerID=cf1720169940a81c8764b2cac5355d763db5c89ffe09b53993af9b0807b75ad6 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-r86vh/dashboard-metrics-scraper id=7aaeea88-dc32-4678-b3b3-2e38adc5dd2b name=/runtime.v1.RuntimeService/StartContainer sandboxID=bd48bd6794798f0d880f6d7affc69ed03e9f5aa7a993b803f374e7dee55da602
	Nov 24 13:58:52 no-preload-495729 crio[572]: time="2025-11-24T13:58:52.384450187Z" level=info msg="Removing container: 0a7adf2b4746e1fb9e8afcfaba5e975500e5db2d158856c135eff6f024dd7995" id=7a2cd042-74fd-4c8b-b1ac-74bb149a7c32 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 24 13:58:52 no-preload-495729 crio[572]: time="2025-11-24T13:58:52.394807853Z" level=info msg="Removed container 0a7adf2b4746e1fb9e8afcfaba5e975500e5db2d158856c135eff6f024dd7995: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-r86vh/dashboard-metrics-scraper" id=7a2cd042-74fd-4c8b-b1ac-74bb149a7c32 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 24 13:58:53 no-preload-495729 crio[572]: time="2025-11-24T13:58:53.869026446Z" level=info msg="Pulled image: docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029" id=7bf824dd-ebf6-4dd9-9707-bc49ee3e9b35 name=/runtime.v1.ImageService/PullImage
	Nov 24 13:58:53 no-preload-495729 crio[572]: time="2025-11-24T13:58:53.869611309Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=dbadbb5f-2183-480a-98dd-149038828dd8 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 13:58:53 no-preload-495729 crio[572]: time="2025-11-24T13:58:53.871027458Z" level=info msg="Checking image status: docker.io/kubernetesui/dashboard:v2.7.0@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93" id=79ecdb30-7dc7-49b7-b62d-da36c156671a name=/runtime.v1.ImageService/ImageStatus
	Nov 24 13:58:53 no-preload-495729 crio[572]: time="2025-11-24T13:58:53.874234504Z" level=info msg="Creating container: kubernetes-dashboard/kubernetes-dashboard-855c9754f9-xvfgk/kubernetes-dashboard" id=f6a759e5-b374-48e6-91f2-d85d3da8331b name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 13:58:53 no-preload-495729 crio[572]: time="2025-11-24T13:58:53.874369531Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 13:58:53 no-preload-495729 crio[572]: time="2025-11-24T13:58:53.878181099Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 13:58:53 no-preload-495729 crio[572]: time="2025-11-24T13:58:53.878428845Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/1c3e734a06e43d7346148d9a97a2350971ee3115178d587da71c3e4dd74a510c/merged/etc/group: no such file or directory"
	Nov 24 13:58:53 no-preload-495729 crio[572]: time="2025-11-24T13:58:53.878857552Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 13:58:53 no-preload-495729 crio[572]: time="2025-11-24T13:58:53.908068103Z" level=info msg="Created container 9a185bbd7db9231364062faa7b8bf2b09a8815ef19ba81adbcd51a569f653ce2: kubernetes-dashboard/kubernetes-dashboard-855c9754f9-xvfgk/kubernetes-dashboard" id=f6a759e5-b374-48e6-91f2-d85d3da8331b name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 13:58:53 no-preload-495729 crio[572]: time="2025-11-24T13:58:53.908568691Z" level=info msg="Starting container: 9a185bbd7db9231364062faa7b8bf2b09a8815ef19ba81adbcd51a569f653ce2" id=bfa01e1f-4a00-4c97-a80b-ea20210a5c02 name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 13:58:53 no-preload-495729 crio[572]: time="2025-11-24T13:58:53.910330513Z" level=info msg="Started container" PID=1743 containerID=9a185bbd7db9231364062faa7b8bf2b09a8815ef19ba81adbcd51a569f653ce2 description=kubernetes-dashboard/kubernetes-dashboard-855c9754f9-xvfgk/kubernetes-dashboard id=bfa01e1f-4a00-4c97-a80b-ea20210a5c02 name=/runtime.v1.RuntimeService/StartContainer sandboxID=98a77350050120ef00624b1cd984b628ee97e0dff383d0a339c372426984c1b1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	9a185bbd7db92       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   17 seconds ago      Running             kubernetes-dashboard        0                   98a7735005012       kubernetes-dashboard-855c9754f9-xvfgk        kubernetes-dashboard
	cf1720169940a       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           19 seconds ago      Exited              dashboard-metrics-scraper   1                   bd48bd6794798       dashboard-metrics-scraper-6ffb444bf9-r86vh   kubernetes-dashboard
	ab498d86243e8       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           30 seconds ago      Running             busybox                     1                   578a6b4c26cfd       busybox                                      default
	f8ae90386bcd1       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           30 seconds ago      Running             coredns                     0                   01dea1db56b46       coredns-66bc5c9577-b7t2v                     kube-system
	2a07af39c8cfa       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           33 seconds ago      Running             kube-proxy                  0                   e6c97536c6b64       kube-proxy-mxzvp                             kube-system
	0e39be3eda514       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           33 seconds ago      Exited              storage-provisioner         0                   c403021447e90       storage-provisioner                          kube-system
	903c84e46ccde       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           33 seconds ago      Running             kindnet-cni                 0                   65033d721a272       kindnet-mtrx6                                kube-system
	c5414bcf8f37e       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           36 seconds ago      Running             kube-apiserver              0                   f9494f0e364c9       kube-apiserver-no-preload-495729             kube-system
	62030928e1d17       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           36 seconds ago      Running             kube-scheduler              0                   9fd49b2cadb2a       kube-scheduler-no-preload-495729             kube-system
	fe3faa18594ef       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           36 seconds ago      Running             kube-controller-manager     0                   2a662a0321258       kube-controller-manager-no-preload-495729    kube-system
	157c65960e2b0       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           36 seconds ago      Running             etcd                        0                   180cc6cf78bac       etcd-no-preload-495729                       kube-system
	
	
	==> coredns [f8ae90386bcd1263701dc3942191b95109eee84dbc38e65217bebf360af1be31] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:49022 - 26097 "HINFO IN 7324422859843716833.8518420187868997806. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.094296095s
	
	
	==> describe nodes <==
	Name:               no-preload-495729
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-495729
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b5d1c9f4e75f4e638a533695fd62619949cefcab
	                    minikube.k8s.io/name=no-preload-495729
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T13_57_40_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 13:57:36 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-495729
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 13:58:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 13:58:46 +0000   Mon, 24 Nov 2025 13:57:35 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 13:58:46 +0000   Mon, 24 Nov 2025 13:57:35 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 13:58:46 +0000   Mon, 24 Nov 2025 13:57:35 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 13:58:46 +0000   Mon, 24 Nov 2025 13:58:46 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    no-preload-495729
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                b9ead28d-5d73-474f-b9bc-4fe7bfd306f8
	  Boot ID:                    9a34d64a-eb17-4892-9c0b-855837aec864
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         71s
	  kube-system                 coredns-66bc5c9577-b7t2v                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     86s
	  kube-system                 etcd-no-preload-495729                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         92s
	  kube-system                 kindnet-mtrx6                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      87s
	  kube-system                 kube-apiserver-no-preload-495729              250m (3%)     0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 kube-controller-manager-no-preload-495729     200m (2%)     0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 kube-proxy-mxzvp                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 kube-scheduler-no-preload-495729              100m (1%)     0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         86s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-r86vh    0 (0%)        0 (0%)      0 (0%)           0 (0%)         31s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-xvfgk         0 (0%)        0 (0%)      0 (0%)           0 (0%)         31s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 85s                kube-proxy       
	  Normal  Starting                 33s                kube-proxy       
	  Normal  Starting                 97s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  97s (x8 over 97s)  kubelet          Node no-preload-495729 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    97s (x8 over 97s)  kubelet          Node no-preload-495729 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     97s (x8 over 97s)  kubelet          Node no-preload-495729 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    92s                kubelet          Node no-preload-495729 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  92s                kubelet          Node no-preload-495729 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     92s                kubelet          Node no-preload-495729 status is now: NodeHasSufficientPID
	  Normal  Starting                 92s                kubelet          Starting kubelet.
	  Normal  RegisteredNode           88s                node-controller  Node no-preload-495729 event: Registered Node no-preload-495729 in Controller
	  Normal  NodeReady                73s                kubelet          Node no-preload-495729 status is now: NodeReady
	  Normal  Starting                 37s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  37s (x8 over 37s)  kubelet          Node no-preload-495729 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    37s (x8 over 37s)  kubelet          Node no-preload-495729 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     37s (x8 over 37s)  kubelet          Node no-preload-495729 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           31s                node-controller  Node no-preload-495729 event: Registered Node no-preload-495729 in Controller
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a c8 62 0b 56 43 08 06
	[  +0.000389] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 7e 1c 4f d3 0f 6c 08 06
	[Nov24 13:16] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +1.054353] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +1.023912] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +1.023872] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +1.023897] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +1.023891] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +2.047768] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +4.031637] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +8.191144] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[ +16.382308] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[Nov24 13:17] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	
	
	==> etcd [157c65960e2b04d4d57edcc130777d480830cee904e929103df8bc888e89eb35] <==
	{"level":"warn","ts":"2025-11-24T13:58:36.220287Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48742","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:58:36.225844Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48768","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:58:36.231881Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:58:36.238383Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48810","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:58:36.245395Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:58:36.251227Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48848","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:58:36.257186Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48874","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:58:36.263178Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48886","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:58:36.269617Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48904","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:58:36.276224Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48922","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:58:36.282242Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48938","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:58:36.288095Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48954","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:58:36.294185Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48966","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:58:36.300504Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:48998","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:58:36.306285Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49018","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:58:36.323319Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49038","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:58:36.328826Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49060","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:58:36.335403Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49076","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:58:36.381684Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49088","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-24T13:58:48.891607Z","caller":"traceutil/trace.go:172","msg":"trace[1029022935] linearizableReadLoop","detail":"{readStateIndex:653; appliedIndex:653; }","duration":"113.058664ms","start":"2025-11-24T13:58:48.778523Z","end":"2025-11-24T13:58:48.891582Z","steps":["trace[1029022935] 'read index received'  (duration: 113.050438ms)","trace[1029022935] 'applied index is now lower than readState.Index'  (duration: 6.916µs)"],"step_count":2}
	{"level":"info","ts":"2025-11-24T13:58:48.891750Z","caller":"traceutil/trace.go:172","msg":"trace[1104860547] transaction","detail":"{read_only:false; response_revision:624; number_of_response:1; }","duration":"115.55741ms","start":"2025-11-24T13:58:48.776180Z","end":"2025-11-24T13:58:48.891738Z","steps":["trace[1104860547] 'process raft request'  (duration: 115.437472ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-24T13:58:48.891776Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"113.228471ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/no-preload-495729\" limit:1 ","response":"range_response_count:1 size:4853"}
	{"level":"info","ts":"2025-11-24T13:58:48.891828Z","caller":"traceutil/trace.go:172","msg":"trace[1769369819] range","detail":"{range_begin:/registry/minions/no-preload-495729; range_end:; response_count:1; response_revision:623; }","duration":"113.301914ms","start":"2025-11-24T13:58:48.778518Z","end":"2025-11-24T13:58:48.891820Z","steps":["trace[1769369819] 'agreement among raft nodes before linearized reading'  (duration: 113.153705ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T13:58:49.108319Z","caller":"traceutil/trace.go:172","msg":"trace[1429464594] transaction","detail":"{read_only:false; response_revision:628; number_of_response:1; }","duration":"159.060502ms","start":"2025-11-24T13:58:48.949243Z","end":"2025-11-24T13:58:49.108304Z","steps":["trace[1429464594] 'process raft request'  (duration: 130.273441ms)","trace[1429464594] 'compare'  (duration: 28.698889ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-24T13:58:49.306190Z","caller":"traceutil/trace.go:172","msg":"trace[1802569450] transaction","detail":"{read_only:false; response_revision:629; number_of_response:1; }","duration":"110.432941ms","start":"2025-11-24T13:58:49.195727Z","end":"2025-11-24T13:58:49.306160Z","steps":["trace[1802569450] 'process raft request'  (duration: 100.438185ms)"],"step_count":1}
	
	
	==> kernel <==
	 13:59:11 up  2:41,  0 user,  load average: 1.66, 2.76, 1.93
	Linux no-preload-495729 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [903c84e46ccdef1d6896c69c09ebe9a3407439b6081697d0a3f3cb40af80da77] <==
	I1124 13:58:37.937855       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 13:58:37.938130       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1124 13:58:37.938277       1 main.go:148] setting mtu 1500 for CNI 
	I1124 13:58:37.938301       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 13:58:37.938342       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T13:58:38Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 13:58:38.137679       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 13:58:38.137714       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 13:58:38.137727       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 13:58:38.137860       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1124 13:58:38.437937       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 13:58:38.437966       1 metrics.go:72] Registering metrics
	I1124 13:58:38.438119       1 controller.go:711] "Syncing nftables rules"
	I1124 13:58:48.049515       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1124 13:58:48.049580       1 main.go:301] handling current node
	I1124 13:58:58.052855       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1124 13:58:58.052905       1 main.go:301] handling current node
	I1124 13:59:08.058966       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1124 13:59:08.058990       1 main.go:301] handling current node
	
	
	==> kube-apiserver [c5414bcf8f37eefc1509b423927ff2e9afae879fa089646bfe236c7e8838f941] <==
	I1124 13:58:36.832814       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1124 13:58:36.832975       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1124 13:58:36.833078       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1124 13:58:36.833119       1 aggregator.go:171] initial CRD sync complete...
	I1124 13:58:36.833135       1 autoregister_controller.go:144] Starting autoregister controller
	I1124 13:58:36.833191       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1124 13:58:36.833251       1 cache.go:39] Caches are synced for autoregister controller
	I1124 13:58:36.833442       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1124 13:58:36.833490       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1124 13:58:36.833563       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1124 13:58:36.833562       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1124 13:58:36.839986       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 13:58:36.840578       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1124 13:58:36.868053       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1124 13:58:37.055787       1 controller.go:667] quota admission added evaluator for: namespaces
	I1124 13:58:37.079561       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1124 13:58:37.096771       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 13:58:37.102256       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 13:58:37.111813       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1124 13:58:37.138441       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.100.48.248"}
	I1124 13:58:37.147429       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.106.230.82"}
	I1124 13:58:37.735343       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1124 13:58:40.516532       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1124 13:58:40.567137       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1124 13:58:40.766729       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [fe3faa18594eff20855f0be9dc75861d22f7a99057fb8fd3c24ff01eaf028868] <==
	I1124 13:58:40.146485       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1124 13:58:40.163923       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1124 13:58:40.163945       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1124 13:58:40.164005       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 13:58:40.164021       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1124 13:58:40.164026       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1124 13:58:40.164157       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1124 13:58:40.164190       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1124 13:58:40.164231       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1124 13:58:40.164252       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1124 13:58:40.164343       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1124 13:58:40.164430       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1124 13:58:40.164431       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1124 13:58:40.165105       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1124 13:58:40.166906       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1124 13:58:40.167461       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1124 13:58:40.167779       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1124 13:58:40.168956       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1124 13:58:40.169052       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 13:58:40.172305       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1124 13:58:40.176500       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1124 13:58:40.177691       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1124 13:58:40.179978       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1124 13:58:40.188251       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 13:58:50.098051       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [2a07af39c8cfa9f54acb40356ea5d9c755cef6b9908261e0552df8388a4e4b5b] <==
	I1124 13:58:37.693456       1 server_linux.go:53] "Using iptables proxy"
	I1124 13:58:37.771665       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1124 13:58:37.872617       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1124 13:58:37.872653       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1124 13:58:37.872753       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 13:58:37.893798       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 13:58:37.893855       1 server_linux.go:132] "Using iptables Proxier"
	I1124 13:58:37.899769       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 13:58:37.900585       1 server.go:527] "Version info" version="v1.34.1"
	I1124 13:58:37.900610       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 13:58:37.903432       1 config.go:200] "Starting service config controller"
	I1124 13:58:37.903463       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 13:58:37.903801       1 config.go:309] "Starting node config controller"
	I1124 13:58:37.903812       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 13:58:37.903818       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1124 13:58:37.903929       1 config.go:106] "Starting endpoint slice config controller"
	I1124 13:58:37.903943       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 13:58:37.903964       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 13:58:37.903969       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 13:58:38.004032       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1124 13:58:38.004056       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1124 13:58:38.004083       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [62030928e1d177cbf0ad8f12916eb88c433896f670073ef28f5784598ad3be2b] <==
	I1124 13:58:35.364600       1 serving.go:386] Generated self-signed cert in-memory
	W1124 13:58:36.762177       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1124 13:58:36.762205       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1124 13:58:36.762231       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1124 13:58:36.762241       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1124 13:58:36.802077       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1124 13:58:36.802233       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 13:58:36.805276       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 13:58:36.805375       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 13:58:36.806337       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1124 13:58:36.806438       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1124 13:58:36.906009       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 24 13:58:38 no-preload-495729 kubelet[722]: E1124 13:58:38.068998     722 projected.go:196] Error preparing data for projected volume kube-api-access-8mkhf for pod default/busybox: object "default"/"kube-root-ca.crt" not registered
	Nov 24 13:58:38 no-preload-495729 kubelet[722]: E1124 13:58:38.069070     722 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bf3a1272-92ff-45db-ba2f-8e360dd19c97-kube-api-access-8mkhf podName:bf3a1272-92ff-45db-ba2f-8e360dd19c97 nodeName:}" failed. No retries permitted until 2025-11-24 13:58:39.069053101 +0000 UTC m=+4.832049123 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-8mkhf" (UniqueName: "kubernetes.io/projected/bf3a1272-92ff-45db-ba2f-8e360dd19c97-kube-api-access-8mkhf") pod "busybox" (UID: "bf3a1272-92ff-45db-ba2f-8e360dd19c97") : object "default"/"kube-root-ca.crt" not registered
	Nov 24 13:58:38 no-preload-495729 kubelet[722]: E1124 13:58:38.973286     722 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Nov 24 13:58:38 no-preload-495729 kubelet[722]: E1124 13:58:38.973362     722 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/cfd3642f-4fab-4d58-ac21-5c59c0820cb6-config-volume podName:cfd3642f-4fab-4d58-ac21-5c59c0820cb6 nodeName:}" failed. No retries permitted until 2025-11-24 13:58:40.973349133 +0000 UTC m=+6.736345142 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/cfd3642f-4fab-4d58-ac21-5c59c0820cb6-config-volume") pod "coredns-66bc5c9577-b7t2v" (UID: "cfd3642f-4fab-4d58-ac21-5c59c0820cb6") : object "kube-system"/"coredns" not registered
	Nov 24 13:58:39 no-preload-495729 kubelet[722]: E1124 13:58:39.073692     722 projected.go:291] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	Nov 24 13:58:39 no-preload-495729 kubelet[722]: E1124 13:58:39.073721     722 projected.go:196] Error preparing data for projected volume kube-api-access-8mkhf for pod default/busybox: object "default"/"kube-root-ca.crt" not registered
	Nov 24 13:58:39 no-preload-495729 kubelet[722]: E1124 13:58:39.073807     722 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/bf3a1272-92ff-45db-ba2f-8e360dd19c97-kube-api-access-8mkhf podName:bf3a1272-92ff-45db-ba2f-8e360dd19c97 nodeName:}" failed. No retries permitted until 2025-11-24 13:58:41.073785275 +0000 UTC m=+6.836781305 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-8mkhf" (UniqueName: "kubernetes.io/projected/bf3a1272-92ff-45db-ba2f-8e360dd19c97-kube-api-access-8mkhf") pod "busybox" (UID: "bf3a1272-92ff-45db-ba2f-8e360dd19c97") : object "default"/"kube-root-ca.crt" not registered
	Nov 24 13:58:47 no-preload-495729 kubelet[722]: I1124 13:58:47.120859     722 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96bgt\" (UniqueName: \"kubernetes.io/projected/885596b0-37d2-4c9a-9577-ac17e3e35b79-kube-api-access-96bgt\") pod \"kubernetes-dashboard-855c9754f9-xvfgk\" (UID: \"885596b0-37d2-4c9a-9577-ac17e3e35b79\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-xvfgk"
	Nov 24 13:58:47 no-preload-495729 kubelet[722]: I1124 13:58:47.120924     722 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84tg5\" (UniqueName: \"kubernetes.io/projected/4d95054b-09c8-44da-8982-1c48abf3f219-kube-api-access-84tg5\") pod \"dashboard-metrics-scraper-6ffb444bf9-r86vh\" (UID: \"4d95054b-09c8-44da-8982-1c48abf3f219\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-r86vh"
	Nov 24 13:58:47 no-preload-495729 kubelet[722]: I1124 13:58:47.120969     722 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/4d95054b-09c8-44da-8982-1c48abf3f219-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-r86vh\" (UID: \"4d95054b-09c8-44da-8982-1c48abf3f219\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-r86vh"
	Nov 24 13:58:47 no-preload-495729 kubelet[722]: I1124 13:58:47.121002     722 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/885596b0-37d2-4c9a-9577-ac17e3e35b79-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-xvfgk\" (UID: \"885596b0-37d2-4c9a-9577-ac17e3e35b79\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-xvfgk"
	Nov 24 13:58:48 no-preload-495729 kubelet[722]: I1124 13:58:48.762322     722 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 24 13:58:51 no-preload-495729 kubelet[722]: I1124 13:58:51.377361     722 scope.go:117] "RemoveContainer" containerID="0a7adf2b4746e1fb9e8afcfaba5e975500e5db2d158856c135eff6f024dd7995"
	Nov 24 13:58:52 no-preload-495729 kubelet[722]: I1124 13:58:52.383015     722 scope.go:117] "RemoveContainer" containerID="0a7adf2b4746e1fb9e8afcfaba5e975500e5db2d158856c135eff6f024dd7995"
	Nov 24 13:58:52 no-preload-495729 kubelet[722]: I1124 13:58:52.383120     722 scope.go:117] "RemoveContainer" containerID="cf1720169940a81c8764b2cac5355d763db5c89ffe09b53993af9b0807b75ad6"
	Nov 24 13:58:52 no-preload-495729 kubelet[722]: E1124 13:58:52.383284     722 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-r86vh_kubernetes-dashboard(4d95054b-09c8-44da-8982-1c48abf3f219)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-r86vh" podUID="4d95054b-09c8-44da-8982-1c48abf3f219"
	Nov 24 13:58:53 no-preload-495729 kubelet[722]: I1124 13:58:53.387639     722 scope.go:117] "RemoveContainer" containerID="cf1720169940a81c8764b2cac5355d763db5c89ffe09b53993af9b0807b75ad6"
	Nov 24 13:58:53 no-preload-495729 kubelet[722]: E1124 13:58:53.387816     722 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-r86vh_kubernetes-dashboard(4d95054b-09c8-44da-8982-1c48abf3f219)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-r86vh" podUID="4d95054b-09c8-44da-8982-1c48abf3f219"
	Nov 24 13:58:54 no-preload-495729 kubelet[722]: I1124 13:58:54.403486     722 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-xvfgk" podStartSLOduration=7.819098572 podStartE2EDuration="14.40346653s" podCreationTimestamp="2025-11-24 13:58:40 +0000 UTC" firstStartedPulling="2025-11-24 13:58:47.286157499 +0000 UTC m=+13.049153521" lastFinishedPulling="2025-11-24 13:58:53.870525469 +0000 UTC m=+19.633521479" observedRunningTime="2025-11-24 13:58:54.40315408 +0000 UTC m=+20.166150113" watchObservedRunningTime="2025-11-24 13:58:54.40346653 +0000 UTC m=+20.166462561"
	Nov 24 13:58:57 no-preload-495729 kubelet[722]: I1124 13:58:57.257485     722 scope.go:117] "RemoveContainer" containerID="cf1720169940a81c8764b2cac5355d763db5c89ffe09b53993af9b0807b75ad6"
	Nov 24 13:58:57 no-preload-495729 kubelet[722]: E1124 13:58:57.257647     722 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-r86vh_kubernetes-dashboard(4d95054b-09c8-44da-8982-1c48abf3f219)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-r86vh" podUID="4d95054b-09c8-44da-8982-1c48abf3f219"
	Nov 24 13:59:06 no-preload-495729 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 24 13:59:06 no-preload-495729 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 24 13:59:06 no-preload-495729 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 24 13:59:06 no-preload-495729 systemd[1]: kubelet.service: Consumed 1.134s CPU time.
	
	
	==> kubernetes-dashboard [9a185bbd7db9231364062faa7b8bf2b09a8815ef19ba81adbcd51a569f653ce2] <==
	2025/11/24 13:58:53 Starting overwatch
	2025/11/24 13:58:53 Using namespace: kubernetes-dashboard
	2025/11/24 13:58:53 Using in-cluster config to connect to apiserver
	2025/11/24 13:58:53 Using secret token for csrf signing
	2025/11/24 13:58:53 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/24 13:58:53 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/24 13:58:53 Successful initial request to the apiserver, version: v1.34.1
	2025/11/24 13:58:53 Generating JWE encryption key
	2025/11/24 13:58:53 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/24 13:58:53 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/24 13:58:54 Initializing JWE encryption key from synchronized object
	2025/11/24 13:58:54 Creating in-cluster Sidecar client
	2025/11/24 13:58:54 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/24 13:58:54 Serving insecurely on HTTP port: 9090
	
	
	==> storage-provisioner [0e39be3eda51444362a2e29b8512e0cd1c604619e642090db9c1ba4832ceac50] <==
	I1124 13:58:37.666131       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1124 13:59:07.667947       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-495729 -n no-preload-495729
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-495729 -n no-preload-495729: exit status 2 (362.409137ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-495729 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/Pause (6.21s)
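The post-mortem above ends with the apiserver still reported as Running (exit status 2 from the status check), yet the kubelet was stopped at 13:59:06 and storage-provisioner then timed out reaching the in-cluster apiserver address. One quick follow-up check from inside the node (a sketch assuming the default service IP 10.96.0.1 seen in the log; the profile may since have been deleted by the later cleanup steps):

  minikube ssh -p no-preload-495729 -- 'curl -sk --max-time 5 https://10.96.0.1:443/version; echo "exit=$?"'

A timeout here would point at the apiserver container having been paused or stopped partway through the failed pause run rather than at a host-level networking problem.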

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (6.27s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-551674 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p old-k8s-version-551674 --alsologtostderr -v=1: exit status 80 (2.481583375s)

                                                
                                                
-- stdout --
	* Pausing node old-k8s-version-551674 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1124 13:59:22.723219  600140 out.go:360] Setting OutFile to fd 1 ...
	I1124 13:59:22.723353  600140 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:59:22.723363  600140 out.go:374] Setting ErrFile to fd 2...
	I1124 13:59:22.723369  600140 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:59:22.723564  600140 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-348000/.minikube/bin
	I1124 13:59:22.723827  600140 out.go:368] Setting JSON to false
	I1124 13:59:22.723854  600140 mustload.go:66] Loading cluster: old-k8s-version-551674
	I1124 13:59:22.724228  600140 config.go:182] Loaded profile config "old-k8s-version-551674": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1124 13:59:22.724633  600140 cli_runner.go:164] Run: docker container inspect old-k8s-version-551674 --format={{.State.Status}}
	I1124 13:59:22.743301  600140 host.go:66] Checking if "old-k8s-version-551674" exists ...
	I1124 13:59:22.743642  600140 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 13:59:22.798283  600140 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:81 OomKillDisable:false NGoroutines:88 SystemTime:2025-11-24 13:59:22.7890382 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[ma
p[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 13:59:22.799117  600140 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:old-k8s-version-551674 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=
true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1124 13:59:22.800825  600140 out.go:179] * Pausing node old-k8s-version-551674 ... 
	I1124 13:59:22.801833  600140 host.go:66] Checking if "old-k8s-version-551674" exists ...
	I1124 13:59:22.802163  600140 ssh_runner.go:195] Run: systemctl --version
	I1124 13:59:22.802219  600140 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-551674
	I1124 13:59:22.818993  600140 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33438 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/old-k8s-version-551674/id_rsa Username:docker}
	I1124 13:59:22.919353  600140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 13:59:22.948042  600140 pause.go:52] kubelet running: true
	I1124 13:59:22.948122  600140 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1124 13:59:23.107308  600140 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1124 13:59:23.107408  600140 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1124 13:59:23.169605  600140 cri.go:89] found id: "d08df8429929bc9c5bafd803df7535916fa219a21f1a5928b33cccf8ac1b25c0"
	I1124 13:59:23.169625  600140 cri.go:89] found id: "5945066c1bf4374e52728a69c48a556dbb99eb23b787887fcdc19f79b27dbdf1"
	I1124 13:59:23.169629  600140 cri.go:89] found id: "79cc18514458ab77dd20c134c4befb59891d55b0c82fe66dfc6a6a3676870f3c"
	I1124 13:59:23.169632  600140 cri.go:89] found id: "6a650aa5e54cdab0b1e4c8209675695989447069f4a73748f4310f040430f50c"
	I1124 13:59:23.169641  600140 cri.go:89] found id: "20e66ad022041ccc62db2d900a48a8abc2e3d419daf8b5de2ef5544962096bfd"
	I1124 13:59:23.169644  600140 cri.go:89] found id: "9b964e539060bf1c1a1da0a82bb08dc64769689e2441ea480573fa9ab7f2a79c"
	I1124 13:59:23.169647  600140 cri.go:89] found id: "e8b2cdae759a78a53d4bb761e54084097e234b3a4625fcace0612d86af8ce8e7"
	I1124 13:59:23.169650  600140 cri.go:89] found id: "53b732b3e825d4856ae0fadf78757166b2bc4c473356786dbb86c08c262503b3"
	I1124 13:59:23.169653  600140 cri.go:89] found id: "5609559ca15853175b8f8a04131a0ec91f834eb1788f2cb29ae4934ff72c93a0"
	I1124 13:59:23.169659  600140 cri.go:89] found id: "058b37251a9c872df1f4d274fae8dd67de3dd59bfe3548eb6ad77a3eefbd1c90"
	I1124 13:59:23.169661  600140 cri.go:89] found id: "4092d50c993eba017f68490cce7dfa72cbda4a6ca12b00a4ef41475527e0dbd6"
	I1124 13:59:23.169664  600140 cri.go:89] found id: ""
	I1124 13:59:23.169700  600140 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 13:59:23.181557  600140 retry.go:31] will retry after 356.059946ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T13:59:23Z" level=error msg="open /run/runc: no such file or directory"
	I1124 13:59:23.537965  600140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 13:59:23.550717  600140 pause.go:52] kubelet running: false
	I1124 13:59:23.550764  600140 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1124 13:59:23.689625  600140 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1124 13:59:23.689715  600140 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1124 13:59:23.751388  600140 cri.go:89] found id: "d08df8429929bc9c5bafd803df7535916fa219a21f1a5928b33cccf8ac1b25c0"
	I1124 13:59:23.751411  600140 cri.go:89] found id: "5945066c1bf4374e52728a69c48a556dbb99eb23b787887fcdc19f79b27dbdf1"
	I1124 13:59:23.751415  600140 cri.go:89] found id: "79cc18514458ab77dd20c134c4befb59891d55b0c82fe66dfc6a6a3676870f3c"
	I1124 13:59:23.751418  600140 cri.go:89] found id: "6a650aa5e54cdab0b1e4c8209675695989447069f4a73748f4310f040430f50c"
	I1124 13:59:23.751421  600140 cri.go:89] found id: "20e66ad022041ccc62db2d900a48a8abc2e3d419daf8b5de2ef5544962096bfd"
	I1124 13:59:23.751425  600140 cri.go:89] found id: "9b964e539060bf1c1a1da0a82bb08dc64769689e2441ea480573fa9ab7f2a79c"
	I1124 13:59:23.751428  600140 cri.go:89] found id: "e8b2cdae759a78a53d4bb761e54084097e234b3a4625fcace0612d86af8ce8e7"
	I1124 13:59:23.751431  600140 cri.go:89] found id: "53b732b3e825d4856ae0fadf78757166b2bc4c473356786dbb86c08c262503b3"
	I1124 13:59:23.751433  600140 cri.go:89] found id: "5609559ca15853175b8f8a04131a0ec91f834eb1788f2cb29ae4934ff72c93a0"
	I1124 13:59:23.751450  600140 cri.go:89] found id: "058b37251a9c872df1f4d274fae8dd67de3dd59bfe3548eb6ad77a3eefbd1c90"
	I1124 13:59:23.751453  600140 cri.go:89] found id: "4092d50c993eba017f68490cce7dfa72cbda4a6ca12b00a4ef41475527e0dbd6"
	I1124 13:59:23.751456  600140 cri.go:89] found id: ""
	I1124 13:59:23.751494  600140 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 13:59:23.762978  600140 retry.go:31] will retry after 202.121957ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T13:59:23Z" level=error msg="open /run/runc: no such file or directory"
	I1124 13:59:23.965345  600140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 13:59:23.978024  600140 pause.go:52] kubelet running: false
	I1124 13:59:23.978084  600140 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1124 13:59:24.120095  600140 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1124 13:59:24.120166  600140 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1124 13:59:24.182966  600140 cri.go:89] found id: "d08df8429929bc9c5bafd803df7535916fa219a21f1a5928b33cccf8ac1b25c0"
	I1124 13:59:24.183002  600140 cri.go:89] found id: "5945066c1bf4374e52728a69c48a556dbb99eb23b787887fcdc19f79b27dbdf1"
	I1124 13:59:24.183008  600140 cri.go:89] found id: "79cc18514458ab77dd20c134c4befb59891d55b0c82fe66dfc6a6a3676870f3c"
	I1124 13:59:24.183011  600140 cri.go:89] found id: "6a650aa5e54cdab0b1e4c8209675695989447069f4a73748f4310f040430f50c"
	I1124 13:59:24.183014  600140 cri.go:89] found id: "20e66ad022041ccc62db2d900a48a8abc2e3d419daf8b5de2ef5544962096bfd"
	I1124 13:59:24.183022  600140 cri.go:89] found id: "9b964e539060bf1c1a1da0a82bb08dc64769689e2441ea480573fa9ab7f2a79c"
	I1124 13:59:24.183025  600140 cri.go:89] found id: "e8b2cdae759a78a53d4bb761e54084097e234b3a4625fcace0612d86af8ce8e7"
	I1124 13:59:24.183028  600140 cri.go:89] found id: "53b732b3e825d4856ae0fadf78757166b2bc4c473356786dbb86c08c262503b3"
	I1124 13:59:24.183031  600140 cri.go:89] found id: "5609559ca15853175b8f8a04131a0ec91f834eb1788f2cb29ae4934ff72c93a0"
	I1124 13:59:24.183037  600140 cri.go:89] found id: "058b37251a9c872df1f4d274fae8dd67de3dd59bfe3548eb6ad77a3eefbd1c90"
	I1124 13:59:24.183040  600140 cri.go:89] found id: "4092d50c993eba017f68490cce7dfa72cbda4a6ca12b00a4ef41475527e0dbd6"
	I1124 13:59:24.183043  600140 cri.go:89] found id: ""
	I1124 13:59:24.183079  600140 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 13:59:24.194566  600140 retry.go:31] will retry after 694.87395ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T13:59:24Z" level=error msg="open /run/runc: no such file or directory"
	I1124 13:59:24.890064  600140 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 13:59:24.903705  600140 pause.go:52] kubelet running: false
	I1124 13:59:24.903771  600140 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1124 13:59:25.054144  600140 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1124 13:59:25.054243  600140 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1124 13:59:25.121132  600140 cri.go:89] found id: "d08df8429929bc9c5bafd803df7535916fa219a21f1a5928b33cccf8ac1b25c0"
	I1124 13:59:25.121158  600140 cri.go:89] found id: "5945066c1bf4374e52728a69c48a556dbb99eb23b787887fcdc19f79b27dbdf1"
	I1124 13:59:25.121206  600140 cri.go:89] found id: "79cc18514458ab77dd20c134c4befb59891d55b0c82fe66dfc6a6a3676870f3c"
	I1124 13:59:25.121218  600140 cri.go:89] found id: "6a650aa5e54cdab0b1e4c8209675695989447069f4a73748f4310f040430f50c"
	I1124 13:59:25.121226  600140 cri.go:89] found id: "20e66ad022041ccc62db2d900a48a8abc2e3d419daf8b5de2ef5544962096bfd"
	I1124 13:59:25.121232  600140 cri.go:89] found id: "9b964e539060bf1c1a1da0a82bb08dc64769689e2441ea480573fa9ab7f2a79c"
	I1124 13:59:25.121237  600140 cri.go:89] found id: "e8b2cdae759a78a53d4bb761e54084097e234b3a4625fcace0612d86af8ce8e7"
	I1124 13:59:25.121242  600140 cri.go:89] found id: "53b732b3e825d4856ae0fadf78757166b2bc4c473356786dbb86c08c262503b3"
	I1124 13:59:25.121247  600140 cri.go:89] found id: "5609559ca15853175b8f8a04131a0ec91f834eb1788f2cb29ae4934ff72c93a0"
	I1124 13:59:25.121255  600140 cri.go:89] found id: "058b37251a9c872df1f4d274fae8dd67de3dd59bfe3548eb6ad77a3eefbd1c90"
	I1124 13:59:25.121260  600140 cri.go:89] found id: "4092d50c993eba017f68490cce7dfa72cbda4a6ca12b00a4ef41475527e0dbd6"
	I1124 13:59:25.121265  600140 cri.go:89] found id: ""
	I1124 13:59:25.121325  600140 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 13:59:25.135704  600140 out.go:203] 
	W1124 13:59:25.136835  600140 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T13:59:25Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T13:59:25Z" level=error msg="open /run/runc: no such file or directory"
	
	W1124 13:59:25.136859  600140 out.go:285] * 
	* 
	W1124 13:59:25.141900  600140 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1124 13:59:25.143060  600140 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p old-k8s-version-551674 --alsologtostderr -v=1 failed: exit status 80
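The stderr trace above lays out the pause flow step by step: minikube disables the kubelet, enumerates the kube-system/kubernetes-dashboard/istio-operator containers via crictl, then runs `sudo runc list -f json`, which fails with "open /run/runc: no such file or directory", is retried three times, and finally aborts the command with GUEST_PAUSE. Re-running those steps by hand is a reasonable first check (a sketch built from the commands in the trace; adjust the profile name if reproducing elsewhere):

  minikube ssh -p old-k8s-version-551674 -- 'sudo systemctl is-active kubelet; sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system | head -3; ls -ld /run/runc; sudo runc list -f json'

If /run/runc is missing while crictl still lists running containers, the container state lives somewhere other than runc's default root (for example, a crio runtime_root override or a crun-based runtime), and `runc list` against the default root will keep failing exactly as it does here.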
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-551674
helpers_test.go:243: (dbg) docker inspect old-k8s-version-551674:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "cffc3242ebb7a478e0db0542c263e257bf174d889dd1e0cc2141101376465207",
	        "Created": "2025-11-24T13:57:09.159057998Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 585065,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T13:58:19.886142631Z",
	            "FinishedAt": "2025-11-24T13:58:19.027800386Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/cffc3242ebb7a478e0db0542c263e257bf174d889dd1e0cc2141101376465207/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/cffc3242ebb7a478e0db0542c263e257bf174d889dd1e0cc2141101376465207/hostname",
	        "HostsPath": "/var/lib/docker/containers/cffc3242ebb7a478e0db0542c263e257bf174d889dd1e0cc2141101376465207/hosts",
	        "LogPath": "/var/lib/docker/containers/cffc3242ebb7a478e0db0542c263e257bf174d889dd1e0cc2141101376465207/cffc3242ebb7a478e0db0542c263e257bf174d889dd1e0cc2141101376465207-json.log",
	        "Name": "/old-k8s-version-551674",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-551674:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-551674",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "cffc3242ebb7a478e0db0542c263e257bf174d889dd1e0cc2141101376465207",
	                "LowerDir": "/var/lib/docker/overlay2/b94f56af59637dbd0200d859a82956dca17af029a5f0461e9cc730804b642613-init/diff:/var/lib/docker/overlay2/b17d6205cf290186b389ac7c1255d7274fea54ef27df9ff8755bddd2d25eb638/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b94f56af59637dbd0200d859a82956dca17af029a5f0461e9cc730804b642613/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b94f56af59637dbd0200d859a82956dca17af029a5f0461e9cc730804b642613/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b94f56af59637dbd0200d859a82956dca17af029a5f0461e9cc730804b642613/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-551674",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-551674/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-551674",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-551674",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-551674",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "5bd227cc8b21234fb34a28c0df04506457095635e0abd24c0add46e4f453b14c",
	            "SandboxKey": "/var/run/docker/netns/5bd227cc8b21",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33438"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33439"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33442"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33440"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33441"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-551674": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "584350b1ae0057925436f12a069654d8b9b77ec40acdead63d77442ee50e6e01",
	                    "EndpointID": "3be3704f81ab6243ef9958900d279082527904b41a2a8e67bd652468d4842eba",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "e2:3b:fe:b9:f6:d8",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-551674",
	                        "cffc3242ebb7"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-551674 -n old-k8s-version-551674
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-551674 -n old-k8s-version-551674: exit status 2 (332.96818ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-551674 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-551674 logs -n 25: (1.131713226s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ ssh     │ -p cilium-165759 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-165759                │ jenkins │ v1.37.0 │ 24 Nov 25 13:57 UTC │                     │
	│ ssh     │ -p cilium-165759 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-165759                │ jenkins │ v1.37.0 │ 24 Nov 25 13:57 UTC │                     │
	│ ssh     │ -p cilium-165759 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-165759                │ jenkins │ v1.37.0 │ 24 Nov 25 13:57 UTC │                     │
	│ ssh     │ -p cilium-165759 sudo crio config                                                                                                                                                                                                             │ cilium-165759                │ jenkins │ v1.37.0 │ 24 Nov 25 13:57 UTC │                     │
	│ delete  │ -p cilium-165759                                                                                                                                                                                                                              │ cilium-165759                │ jenkins │ v1.37.0 │ 24 Nov 25 13:57 UTC │ 24 Nov 25 13:57 UTC │
	│ start   │ -p no-preload-495729 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-495729            │ jenkins │ v1.37.0 │ 24 Nov 25 13:57 UTC │ 24 Nov 25 13:58 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-551674 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-551674       │ jenkins │ v1.37.0 │ 24 Nov 25 13:58 UTC │                     │
	│ stop    │ -p old-k8s-version-551674 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-551674       │ jenkins │ v1.37.0 │ 24 Nov 25 13:58 UTC │ 24 Nov 25 13:58 UTC │
	│ addons  │ enable metrics-server -p no-preload-495729 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-495729            │ jenkins │ v1.37.0 │ 24 Nov 25 13:58 UTC │                     │
	│ stop    │ -p no-preload-495729 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-495729            │ jenkins │ v1.37.0 │ 24 Nov 25 13:58 UTC │ 24 Nov 25 13:58 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-551674 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-551674       │ jenkins │ v1.37.0 │ 24 Nov 25 13:58 UTC │ 24 Nov 25 13:58 UTC │
	│ start   │ -p old-k8s-version-551674 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-551674       │ jenkins │ v1.37.0 │ 24 Nov 25 13:58 UTC │ 24 Nov 25 13:59 UTC │
	│ addons  │ enable dashboard -p no-preload-495729 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-495729            │ jenkins │ v1.37.0 │ 24 Nov 25 13:58 UTC │ 24 Nov 25 13:58 UTC │
	│ start   │ -p no-preload-495729 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-495729            │ jenkins │ v1.37.0 │ 24 Nov 25 13:58 UTC │ 24 Nov 25 13:58 UTC │
	│ start   │ -p cert-expiration-107341 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-107341       │ jenkins │ v1.37.0 │ 24 Nov 25 13:58 UTC │ 24 Nov 25 13:58 UTC │
	│ delete  │ -p cert-expiration-107341                                                                                                                                                                                                                     │ cert-expiration-107341       │ jenkins │ v1.37.0 │ 24 Nov 25 13:58 UTC │ 24 Nov 25 13:58 UTC │
	│ start   │ -p embed-certs-456660 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-456660           │ jenkins │ v1.37.0 │ 24 Nov 25 13:58 UTC │                     │
	│ image   │ no-preload-495729 image list --format=json                                                                                                                                                                                                    │ no-preload-495729            │ jenkins │ v1.37.0 │ 24 Nov 25 13:59 UTC │ 24 Nov 25 13:59 UTC │
	│ pause   │ -p no-preload-495729 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-495729            │ jenkins │ v1.37.0 │ 24 Nov 25 13:59 UTC │                     │
	│ delete  │ -p no-preload-495729                                                                                                                                                                                                                          │ no-preload-495729            │ jenkins │ v1.37.0 │ 24 Nov 25 13:59 UTC │ 24 Nov 25 13:59 UTC │
	│ delete  │ -p no-preload-495729                                                                                                                                                                                                                          │ no-preload-495729            │ jenkins │ v1.37.0 │ 24 Nov 25 13:59 UTC │ 24 Nov 25 13:59 UTC │
	│ delete  │ -p disable-driver-mounts-036543                                                                                                                                                                                                               │ disable-driver-mounts-036543 │ jenkins │ v1.37.0 │ 24 Nov 25 13:59 UTC │ 24 Nov 25 13:59 UTC │
	│ start   │ -p default-k8s-diff-port-098307 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-098307 │ jenkins │ v1.37.0 │ 24 Nov 25 13:59 UTC │                     │
	│ image   │ old-k8s-version-551674 image list --format=json                                                                                                                                                                                               │ old-k8s-version-551674       │ jenkins │ v1.37.0 │ 24 Nov 25 13:59 UTC │ 24 Nov 25 13:59 UTC │
	│ pause   │ -p old-k8s-version-551674 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-551674       │ jenkins │ v1.37.0 │ 24 Nov 25 13:59 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 13:59:15
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 13:59:15.358537  597884 out.go:360] Setting OutFile to fd 1 ...
	I1124 13:59:15.358630  597884 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:59:15.358640  597884 out.go:374] Setting ErrFile to fd 2...
	I1124 13:59:15.358644  597884 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:59:15.358869  597884 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-348000/.minikube/bin
	I1124 13:59:15.359384  597884 out.go:368] Setting JSON to false
	I1124 13:59:15.360564  597884 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":9702,"bootTime":1763983053,"procs":324,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 13:59:15.360618  597884 start.go:143] virtualization: kvm guest
	I1124 13:59:15.362234  597884 out.go:179] * [default-k8s-diff-port-098307] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1124 13:59:15.363455  597884 out.go:179]   - MINIKUBE_LOCATION=21932
	I1124 13:59:15.363497  597884 notify.go:221] Checking for updates...
	I1124 13:59:15.365834  597884 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 13:59:15.367034  597884 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21932-348000/kubeconfig
	I1124 13:59:15.368029  597884 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21932-348000/.minikube
	I1124 13:59:15.370400  597884 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1124 13:59:15.371498  597884 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 13:59:15.372856  597884 config.go:182] Loaded profile config "embed-certs-456660": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 13:59:15.372965  597884 config.go:182] Loaded profile config "kubernetes-upgrade-061040": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 13:59:15.373045  597884 config.go:182] Loaded profile config "old-k8s-version-551674": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1124 13:59:15.373124  597884 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 13:59:15.397564  597884 docker.go:124] docker version: linux-29.0.3:Docker Engine - Community
	I1124 13:59:15.397713  597884 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 13:59:15.451882  597884 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-24 13:59:15.441972191 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 13:59:15.452035  597884 docker.go:319] overlay module found
	I1124 13:59:15.454008  597884 out.go:179] * Using the docker driver based on user configuration
	I1124 13:59:15.454990  597884 start.go:309] selected driver: docker
	I1124 13:59:15.455007  597884 start.go:927] validating driver "docker" against <nil>
	I1124 13:59:15.455021  597884 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 13:59:15.455628  597884 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 13:59:15.518269  597884 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-24 13:59:15.507535267 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 13:59:15.518485  597884 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1124 13:59:15.518729  597884 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 13:59:15.520052  597884 out.go:179] * Using Docker driver with root privileges
	I1124 13:59:15.521137  597884 cni.go:84] Creating CNI manager for ""
	I1124 13:59:15.521222  597884 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 13:59:15.521239  597884 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1124 13:59:15.521319  597884 start.go:353] cluster config:
	{Name:default-k8s-diff-port-098307 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-098307 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SS
HAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 13:59:15.522677  597884 out.go:179] * Starting "default-k8s-diff-port-098307" primary control-plane node in "default-k8s-diff-port-098307" cluster
	I1124 13:59:15.523763  597884 cache.go:134] Beginning downloading kic base image for docker with crio
	I1124 13:59:15.524951  597884 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1124 13:59:15.525935  597884 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 13:59:15.525974  597884 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21932-348000/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1124 13:59:15.525997  597884 cache.go:65] Caching tarball of preloaded images
	I1124 13:59:15.526024  597884 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1124 13:59:15.526131  597884 preload.go:238] Found /home/jenkins/minikube-integration/21932-348000/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1124 13:59:15.526151  597884 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1124 13:59:15.526285  597884 profile.go:143] Saving config to /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/default-k8s-diff-port-098307/config.json ...
	I1124 13:59:15.526319  597884 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/default-k8s-diff-port-098307/config.json: {Name:mk097457f2cf281c13f8600d6e4b69a245edfe55 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:59:15.549153  597884 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1124 13:59:15.549174  597884 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1124 13:59:15.549189  597884 cache.go:240] Successfully downloaded all kic artifacts
	I1124 13:59:15.549218  597884 start.go:360] acquireMachinesLock for default-k8s-diff-port-098307: {Name:mk2fcf2089e89fdca360031b39be958b7ce01e9d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 13:59:15.549312  597884 start.go:364] duration metric: took 76.428µs to acquireMachinesLock for "default-k8s-diff-port-098307"
	I1124 13:59:15.549341  597884 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-098307 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-098307 Namespace:default API
ServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:
false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 13:59:15.549422  597884 start.go:125] createHost starting for "" (driver="docker")
	I1124 13:59:15.238695  592938 out.go:252]   - Configuring RBAC rules ...
	I1124 13:59:15.238843  592938 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1124 13:59:15.242224  592938 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1124 13:59:15.247326  592938 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1124 13:59:15.249800  592938 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1124 13:59:15.253046  592938 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1124 13:59:15.255391  592938 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1124 13:59:15.602636  592938 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1124 13:59:16.020136  592938 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1124 13:59:16.602409  592938 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1124 13:59:16.603749  592938 kubeadm.go:319] 
	I1124 13:59:16.603864  592938 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1124 13:59:16.603874  592938 kubeadm.go:319] 
	I1124 13:59:16.603997  592938 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1124 13:59:16.604026  592938 kubeadm.go:319] 
	I1124 13:59:16.604077  592938 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1124 13:59:16.604145  592938 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1124 13:59:16.604208  592938 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1124 13:59:16.604216  592938 kubeadm.go:319] 
	I1124 13:59:16.604309  592938 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1124 13:59:16.604324  592938 kubeadm.go:319] 
	I1124 13:59:16.604383  592938 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1124 13:59:16.604392  592938 kubeadm.go:319] 
	I1124 13:59:16.604463  592938 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1124 13:59:16.604549  592938 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1124 13:59:16.604627  592938 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1124 13:59:16.604635  592938 kubeadm.go:319] 
	I1124 13:59:16.604739  592938 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1124 13:59:16.604827  592938 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1124 13:59:16.604835  592938 kubeadm.go:319] 
	I1124 13:59:16.604946  592938 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token i0ni2q.905unti7i418tiul \
	I1124 13:59:16.605072  592938 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:8508f5e374ce1614712f271f50423a392652f73206d8a868cc7aac45c80e4a0c \
	I1124 13:59:16.605099  592938 kubeadm.go:319] 	--control-plane 
	I1124 13:59:16.605106  592938 kubeadm.go:319] 
	I1124 13:59:16.605207  592938 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1124 13:59:16.605214  592938 kubeadm.go:319] 
	I1124 13:59:16.605321  592938 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token i0ni2q.905unti7i418tiul \
	I1124 13:59:16.605477  592938 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:8508f5e374ce1614712f271f50423a392652f73206d8a868cc7aac45c80e4a0c 
	I1124 13:59:16.608171  592938 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1124 13:59:16.608324  592938 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1124 13:59:16.608353  592938 cni.go:84] Creating CNI manager for ""
	I1124 13:59:16.608365  592938 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 13:59:16.609947  592938 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1124 13:59:16.611036  592938 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1124 13:59:16.615311  592938 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1124 13:59:16.615331  592938 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1124 13:59:16.630146  592938 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1124 13:59:16.872457  592938 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1124 13:59:16.872575  592938 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:59:16.872584  592938 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-456660 minikube.k8s.io/updated_at=2025_11_24T13_59_16_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=b5d1c9f4e75f4e638a533695fd62619949cefcab minikube.k8s.io/name=embed-certs-456660 minikube.k8s.io/primary=true
	I1124 13:59:16.885542  592938 ops.go:34] apiserver oom_adj: -16
	I1124 13:59:16.961862  592938 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:59:17.462916  592938 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:59:15.924277  549693 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 13:59:15.924922  549693 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 13:59:15.924993  549693 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 13:59:15.925055  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 13:59:15.956392  549693 cri.go:89] found id: "89e3f961c6df2973e09087d6d2b6be846ea3406ee999bc335e9e0f6516db0696"
	I1124 13:59:15.956420  549693 cri.go:89] found id: ""
	I1124 13:59:15.956430  549693 logs.go:282] 1 containers: [89e3f961c6df2973e09087d6d2b6be846ea3406ee999bc335e9e0f6516db0696]
	I1124 13:59:15.956486  549693 ssh_runner.go:195] Run: which crictl
	I1124 13:59:15.962049  549693 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 13:59:15.962111  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 13:59:16.011872  549693 cri.go:89] found id: ""
	I1124 13:59:16.012071  549693 logs.go:282] 0 containers: []
	W1124 13:59:16.012099  549693 logs.go:284] No container was found matching "etcd"
	I1124 13:59:16.012110  549693 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 13:59:16.012183  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 13:59:16.046443  549693 cri.go:89] found id: ""
	I1124 13:59:16.046468  549693 logs.go:282] 0 containers: []
	W1124 13:59:16.046479  549693 logs.go:284] No container was found matching "coredns"
	I1124 13:59:16.046487  549693 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 13:59:16.046540  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 13:59:16.073728  549693 cri.go:89] found id: "09474ec71ef80f8f9036b351250c36774a1251b7d7957550d815fc37dbb0f4ff"
	I1124 13:59:16.073754  549693 cri.go:89] found id: ""
	I1124 13:59:16.073764  549693 logs.go:282] 1 containers: [09474ec71ef80f8f9036b351250c36774a1251b7d7957550d815fc37dbb0f4ff]
	I1124 13:59:16.073830  549693 ssh_runner.go:195] Run: which crictl
	I1124 13:59:16.078391  549693 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 13:59:16.078455  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 13:59:16.113265  549693 cri.go:89] found id: ""
	I1124 13:59:16.113290  549693 logs.go:282] 0 containers: []
	W1124 13:59:16.113309  549693 logs.go:284] No container was found matching "kube-proxy"
	I1124 13:59:16.113317  549693 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 13:59:16.113374  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 13:59:16.144874  549693 cri.go:89] found id: "df08eccce5717e621bf01ad84d40ae0cec6de290402e562d268ae5529f56350f"
	I1124 13:59:16.144908  549693 cri.go:89] found id: ""
	I1124 13:59:16.144919  549693 logs.go:282] 1 containers: [df08eccce5717e621bf01ad84d40ae0cec6de290402e562d268ae5529f56350f]
	I1124 13:59:16.144984  549693 ssh_runner.go:195] Run: which crictl
	I1124 13:59:16.149591  549693 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 13:59:16.149639  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 13:59:16.179303  549693 cri.go:89] found id: ""
	I1124 13:59:16.179326  549693 logs.go:282] 0 containers: []
	W1124 13:59:16.179337  549693 logs.go:284] No container was found matching "kindnet"
	I1124 13:59:16.179344  549693 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1124 13:59:16.179394  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 13:59:16.209028  549693 cri.go:89] found id: ""
	I1124 13:59:16.209051  549693 logs.go:282] 0 containers: []
	W1124 13:59:16.209061  549693 logs.go:284] No container was found matching "storage-provisioner"
	I1124 13:59:16.209075  549693 logs.go:123] Gathering logs for kubelet ...
	I1124 13:59:16.209090  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 13:59:16.313612  549693 logs.go:123] Gathering logs for dmesg ...
	I1124 13:59:16.313650  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 13:59:16.334144  549693 logs.go:123] Gathering logs for describe nodes ...
	I1124 13:59:16.334177  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 13:59:16.398857  549693 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 13:59:16.398882  549693 logs.go:123] Gathering logs for kube-apiserver [89e3f961c6df2973e09087d6d2b6be846ea3406ee999bc335e9e0f6516db0696] ...
	I1124 13:59:16.398911  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 89e3f961c6df2973e09087d6d2b6be846ea3406ee999bc335e9e0f6516db0696"
	I1124 13:59:16.437749  549693 logs.go:123] Gathering logs for kube-scheduler [09474ec71ef80f8f9036b351250c36774a1251b7d7957550d815fc37dbb0f4ff] ...
	I1124 13:59:16.437788  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 09474ec71ef80f8f9036b351250c36774a1251b7d7957550d815fc37dbb0f4ff"
	I1124 13:59:16.499958  549693 logs.go:123] Gathering logs for kube-controller-manager [df08eccce5717e621bf01ad84d40ae0cec6de290402e562d268ae5529f56350f] ...
	I1124 13:59:16.499992  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 df08eccce5717e621bf01ad84d40ae0cec6de290402e562d268ae5529f56350f"
	I1124 13:59:16.530576  549693 logs.go:123] Gathering logs for CRI-O ...
	I1124 13:59:16.530614  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 13:59:16.598120  549693 logs.go:123] Gathering logs for container status ...
	I1124 13:59:16.598154  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 13:59:19.135804  549693 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 13:59:19.136221  549693 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 13:59:19.136287  549693 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 13:59:19.136336  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 13:59:19.162240  549693 cri.go:89] found id: "89e3f961c6df2973e09087d6d2b6be846ea3406ee999bc335e9e0f6516db0696"
	I1124 13:59:19.162257  549693 cri.go:89] found id: ""
	I1124 13:59:19.162266  549693 logs.go:282] 1 containers: [89e3f961c6df2973e09087d6d2b6be846ea3406ee999bc335e9e0f6516db0696]
	I1124 13:59:19.162308  549693 ssh_runner.go:195] Run: which crictl
	I1124 13:59:19.166168  549693 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 13:59:19.166233  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 13:59:19.191918  549693 cri.go:89] found id: ""
	I1124 13:59:19.191940  549693 logs.go:282] 0 containers: []
	W1124 13:59:19.191948  549693 logs.go:284] No container was found matching "etcd"
	I1124 13:59:19.191956  549693 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 13:59:19.192005  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 13:59:19.215713  549693 cri.go:89] found id: ""
	I1124 13:59:19.215738  549693 logs.go:282] 0 containers: []
	W1124 13:59:19.215747  549693 logs.go:284] No container was found matching "coredns"
	I1124 13:59:19.215754  549693 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 13:59:19.215796  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 13:59:19.241185  549693 cri.go:89] found id: "09474ec71ef80f8f9036b351250c36774a1251b7d7957550d815fc37dbb0f4ff"
	I1124 13:59:19.241209  549693 cri.go:89] found id: ""
	I1124 13:59:19.241220  549693 logs.go:282] 1 containers: [09474ec71ef80f8f9036b351250c36774a1251b7d7957550d815fc37dbb0f4ff]
	I1124 13:59:19.241269  549693 ssh_runner.go:195] Run: which crictl
	I1124 13:59:19.244960  549693 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 13:59:19.245026  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 13:59:19.270257  549693 cri.go:89] found id: ""
	I1124 13:59:19.270276  549693 logs.go:282] 0 containers: []
	W1124 13:59:19.270284  549693 logs.go:284] No container was found matching "kube-proxy"
	I1124 13:59:19.270289  549693 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 13:59:19.270343  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 13:59:19.297085  549693 cri.go:89] found id: "df08eccce5717e621bf01ad84d40ae0cec6de290402e562d268ae5529f56350f"
	I1124 13:59:19.297107  549693 cri.go:89] found id: ""
	I1124 13:59:19.297118  549693 logs.go:282] 1 containers: [df08eccce5717e621bf01ad84d40ae0cec6de290402e562d268ae5529f56350f]
	I1124 13:59:19.297172  549693 ssh_runner.go:195] Run: which crictl
	I1124 13:59:19.301116  549693 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 13:59:19.301179  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 13:59:19.327461  549693 cri.go:89] found id: ""
	I1124 13:59:19.327485  549693 logs.go:282] 0 containers: []
	W1124 13:59:19.327492  549693 logs.go:284] No container was found matching "kindnet"
	I1124 13:59:19.327498  549693 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1124 13:59:19.327540  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 13:59:19.352380  549693 cri.go:89] found id: ""
	I1124 13:59:19.352406  549693 logs.go:282] 0 containers: []
	W1124 13:59:19.352417  549693 logs.go:284] No container was found matching "storage-provisioner"
	I1124 13:59:19.352429  549693 logs.go:123] Gathering logs for describe nodes ...
	I1124 13:59:19.352444  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 13:59:19.406834  549693 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 13:59:19.406856  549693 logs.go:123] Gathering logs for kube-apiserver [89e3f961c6df2973e09087d6d2b6be846ea3406ee999bc335e9e0f6516db0696] ...
	I1124 13:59:19.406872  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 89e3f961c6df2973e09087d6d2b6be846ea3406ee999bc335e9e0f6516db0696"
	I1124 13:59:19.438687  549693 logs.go:123] Gathering logs for kube-scheduler [09474ec71ef80f8f9036b351250c36774a1251b7d7957550d815fc37dbb0f4ff] ...
	I1124 13:59:19.438715  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 09474ec71ef80f8f9036b351250c36774a1251b7d7957550d815fc37dbb0f4ff"
	I1124 13:59:19.492096  549693 logs.go:123] Gathering logs for kube-controller-manager [df08eccce5717e621bf01ad84d40ae0cec6de290402e562d268ae5529f56350f] ...
	I1124 13:59:19.492126  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 df08eccce5717e621bf01ad84d40ae0cec6de290402e562d268ae5529f56350f"
	I1124 13:59:19.520104  549693 logs.go:123] Gathering logs for CRI-O ...
	I1124 13:59:19.520128  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 13:59:19.579431  549693 logs.go:123] Gathering logs for container status ...
	I1124 13:59:19.579456  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 13:59:19.608235  549693 logs.go:123] Gathering logs for kubelet ...
	I1124 13:59:19.608262  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 13:59:19.694673  549693 logs.go:123] Gathering logs for dmesg ...
	I1124 13:59:19.694700  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 13:59:15.551582  597884 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1124 13:59:15.551837  597884 start.go:159] libmachine.API.Create for "default-k8s-diff-port-098307" (driver="docker")
	I1124 13:59:15.551869  597884 client.go:173] LocalClient.Create starting
	I1124 13:59:15.551971  597884 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21932-348000/.minikube/certs/ca.pem
	I1124 13:59:15.552008  597884 main.go:143] libmachine: Decoding PEM data...
	I1124 13:59:15.552026  597884 main.go:143] libmachine: Parsing certificate...
	I1124 13:59:15.552077  597884 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21932-348000/.minikube/certs/cert.pem
	I1124 13:59:15.552096  597884 main.go:143] libmachine: Decoding PEM data...
	I1124 13:59:15.552105  597884 main.go:143] libmachine: Parsing certificate...
	I1124 13:59:15.552404  597884 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-098307 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1124 13:59:15.570087  597884 cli_runner.go:211] docker network inspect default-k8s-diff-port-098307 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1124 13:59:15.570167  597884 network_create.go:284] running [docker network inspect default-k8s-diff-port-098307] to gather additional debugging logs...
	I1124 13:59:15.570189  597884 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-098307
	W1124 13:59:15.588277  597884 cli_runner.go:211] docker network inspect default-k8s-diff-port-098307 returned with exit code 1
	I1124 13:59:15.588314  597884 network_create.go:287] error running [docker network inspect default-k8s-diff-port-098307]: docker network inspect default-k8s-diff-port-098307: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-098307 not found
	I1124 13:59:15.588326  597884 network_create.go:289] output of [docker network inspect default-k8s-diff-port-098307]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-098307 not found
	
	** /stderr **
	I1124 13:59:15.588428  597884 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 13:59:15.606303  597884 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-d51e7dfe1049 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:4a:86:1b:17:16:ff} reservation:<nil>}
	I1124 13:59:15.607437  597884 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-e3a6280986d1 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:92:e6:88:24:ba:69} reservation:<nil>}
	I1124 13:59:15.608134  597884 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-e4f79d672777 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:32:e2:7c:23:0e:27} reservation:<nil>}
	I1124 13:59:15.608659  597884 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-283ea71f66a5 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:b6:70:12:a2:88:dd} reservation:<nil>}
	I1124 13:59:15.609394  597884 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-95ddebcd3d89 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:ba:53:32:2f:bb:ed} reservation:<nil>}
	I1124 13:59:15.610047  597884 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-584350b1ae00 IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:72:e5:2a:e9:2d:0e} reservation:<nil>}
	I1124 13:59:15.610839  597884 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001efee90}
	I1124 13:59:15.610865  597884 network_create.go:124] attempt to create docker network default-k8s-diff-port-098307 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1124 13:59:15.610944  597884 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-098307 default-k8s-diff-port-098307
	I1124 13:59:15.663555  597884 network_create.go:108] docker network default-k8s-diff-port-098307 192.168.103.0/24 created
	I1124 13:59:15.663594  597884 kic.go:121] calculated static IP "192.168.103.2" for the "default-k8s-diff-port-098307" container
	I1124 13:59:15.663680  597884 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1124 13:59:15.680995  597884 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-098307 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-098307 --label created_by.minikube.sigs.k8s.io=true
	I1124 13:59:15.698962  597884 oci.go:103] Successfully created a docker volume default-k8s-diff-port-098307
	I1124 13:59:15.699071  597884 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-098307-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-098307 --entrypoint /usr/bin/test -v default-k8s-diff-port-098307:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1124 13:59:16.123842  597884 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-098307
	I1124 13:59:16.123934  597884 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 13:59:16.123953  597884 kic.go:194] Starting extracting preloaded images to volume ...
	I1124 13:59:16.124049  597884 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21932-348000/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-098307:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
	I1124 13:59:17.962606  592938 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:59:18.462833  592938 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:59:18.962953  592938 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:59:19.462019  592938 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:59:19.962247  592938 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:59:20.462866  592938 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:59:20.962711  592938 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:59:21.462063  592938 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:59:21.529726  592938 kubeadm.go:1114] duration metric: took 4.657208845s to wait for elevateKubeSystemPrivileges
	I1124 13:59:21.529759  592938 kubeadm.go:403] duration metric: took 14.354940344s to StartCluster
	I1124 13:59:21.529783  592938 settings.go:142] acquiring lock: {Name:mk72c17792ecaf5f4aecae499df19a0043a48eea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:59:21.529856  592938 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21932-348000/kubeconfig
	I1124 13:59:21.531521  592938 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-348000/kubeconfig: {Name:mk6bbc2300c711b206dd5e2ef6fd04da250c6338 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:59:21.531717  592938 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 13:59:21.531736  592938 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1124 13:59:21.531777  592938 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 13:59:21.531877  592938 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-456660"
	I1124 13:59:21.531909  592938 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-456660"
	I1124 13:59:21.531941  592938 host.go:66] Checking if "embed-certs-456660" exists ...
	I1124 13:59:21.531969  592938 config.go:182] Loaded profile config "embed-certs-456660": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 13:59:21.532019  592938 addons.go:70] Setting default-storageclass=true in profile "embed-certs-456660"
	I1124 13:59:21.532054  592938 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-456660"
	I1124 13:59:21.532416  592938 cli_runner.go:164] Run: docker container inspect embed-certs-456660 --format={{.State.Status}}
	I1124 13:59:21.532504  592938 cli_runner.go:164] Run: docker container inspect embed-certs-456660 --format={{.State.Status}}
	I1124 13:59:21.534292  592938 out.go:179] * Verifying Kubernetes components...
	I1124 13:59:21.535359  592938 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 13:59:21.555669  592938 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 13:59:21.556613  592938 addons.go:239] Setting addon default-storageclass=true in "embed-certs-456660"
	I1124 13:59:21.556655  592938 host.go:66] Checking if "embed-certs-456660" exists ...
	I1124 13:59:21.557008  592938 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 13:59:21.557029  592938 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 13:59:21.557102  592938 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-456660
	I1124 13:59:21.557159  592938 cli_runner.go:164] Run: docker container inspect embed-certs-456660 --format={{.State.Status}}
	I1124 13:59:21.589530  592938 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/embed-certs-456660/id_rsa Username:docker}
	I1124 13:59:21.590772  592938 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 13:59:21.590792  592938 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 13:59:21.590854  592938 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-456660
	I1124 13:59:21.613209  592938 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/embed-certs-456660/id_rsa Username:docker}
	I1124 13:59:21.629112  592938 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1124 13:59:21.680659  592938 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 13:59:21.722752  592938 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 13:59:21.736612  592938 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 13:59:21.818069  592938 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1124 13:59:21.819528  592938 node_ready.go:35] waiting up to 6m0s for node "embed-certs-456660" to be "Ready" ...
	I1124 13:59:22.000817  592938 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1124 13:59:22.002909  592938 addons.go:530] duration metric: took 471.127686ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1124 13:59:22.322280  592938 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-456660" context rescaled to 1 replicas
	I1124 13:59:22.212187  549693 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 13:59:22.212659  549693 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 13:59:22.212713  549693 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 13:59:22.212759  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 13:59:22.241554  549693 cri.go:89] found id: "89e3f961c6df2973e09087d6d2b6be846ea3406ee999bc335e9e0f6516db0696"
	I1124 13:59:22.241575  549693 cri.go:89] found id: ""
	I1124 13:59:22.241585  549693 logs.go:282] 1 containers: [89e3f961c6df2973e09087d6d2b6be846ea3406ee999bc335e9e0f6516db0696]
	I1124 13:59:22.241647  549693 ssh_runner.go:195] Run: which crictl
	I1124 13:59:22.246552  549693 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 13:59:22.246611  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 13:59:22.273701  549693 cri.go:89] found id: ""
	I1124 13:59:22.273729  549693 logs.go:282] 0 containers: []
	W1124 13:59:22.273739  549693 logs.go:284] No container was found matching "etcd"
	I1124 13:59:22.273747  549693 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 13:59:22.273810  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 13:59:22.299054  549693 cri.go:89] found id: ""
	I1124 13:59:22.299084  549693 logs.go:282] 0 containers: []
	W1124 13:59:22.299093  549693 logs.go:284] No container was found matching "coredns"
	I1124 13:59:22.299098  549693 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 13:59:22.299148  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 13:59:22.324688  549693 cri.go:89] found id: "09474ec71ef80f8f9036b351250c36774a1251b7d7957550d815fc37dbb0f4ff"
	I1124 13:59:22.324703  549693 cri.go:89] found id: ""
	I1124 13:59:22.324710  549693 logs.go:282] 1 containers: [09474ec71ef80f8f9036b351250c36774a1251b7d7957550d815fc37dbb0f4ff]
	I1124 13:59:22.324749  549693 ssh_runner.go:195] Run: which crictl
	I1124 13:59:22.328375  549693 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 13:59:22.328439  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 13:59:22.354635  549693 cri.go:89] found id: ""
	I1124 13:59:22.354663  549693 logs.go:282] 0 containers: []
	W1124 13:59:22.354673  549693 logs.go:284] No container was found matching "kube-proxy"
	I1124 13:59:22.354694  549693 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 13:59:22.354759  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 13:59:22.383006  549693 cri.go:89] found id: "df08eccce5717e621bf01ad84d40ae0cec6de290402e562d268ae5529f56350f"
	I1124 13:59:22.383029  549693 cri.go:89] found id: ""
	I1124 13:59:22.383039  549693 logs.go:282] 1 containers: [df08eccce5717e621bf01ad84d40ae0cec6de290402e562d268ae5529f56350f]
	I1124 13:59:22.383100  549693 ssh_runner.go:195] Run: which crictl
	I1124 13:59:22.387706  549693 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 13:59:22.387784  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 13:59:22.418231  549693 cri.go:89] found id: ""
	I1124 13:59:22.418260  549693 logs.go:282] 0 containers: []
	W1124 13:59:22.418272  549693 logs.go:284] No container was found matching "kindnet"
	I1124 13:59:22.418280  549693 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1124 13:59:22.418337  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 13:59:22.448747  549693 cri.go:89] found id: ""
	I1124 13:59:22.448781  549693 logs.go:282] 0 containers: []
	W1124 13:59:22.448791  549693 logs.go:284] No container was found matching "storage-provisioner"
	I1124 13:59:22.448803  549693 logs.go:123] Gathering logs for dmesg ...
	I1124 13:59:22.448817  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 13:59:22.466954  549693 logs.go:123] Gathering logs for describe nodes ...
	I1124 13:59:22.466981  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 13:59:22.526252  549693 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 13:59:22.526276  549693 logs.go:123] Gathering logs for kube-apiserver [89e3f961c6df2973e09087d6d2b6be846ea3406ee999bc335e9e0f6516db0696] ...
	I1124 13:59:22.526291  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 89e3f961c6df2973e09087d6d2b6be846ea3406ee999bc335e9e0f6516db0696"
	I1124 13:59:22.557484  549693 logs.go:123] Gathering logs for kube-scheduler [09474ec71ef80f8f9036b351250c36774a1251b7d7957550d815fc37dbb0f4ff] ...
	I1124 13:59:22.557511  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 09474ec71ef80f8f9036b351250c36774a1251b7d7957550d815fc37dbb0f4ff"
	I1124 13:59:22.611689  549693 logs.go:123] Gathering logs for kube-controller-manager [df08eccce5717e621bf01ad84d40ae0cec6de290402e562d268ae5529f56350f] ...
	I1124 13:59:22.611719  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 df08eccce5717e621bf01ad84d40ae0cec6de290402e562d268ae5529f56350f"
	I1124 13:59:22.640792  549693 logs.go:123] Gathering logs for CRI-O ...
	I1124 13:59:22.640815  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 13:59:22.701013  549693 logs.go:123] Gathering logs for container status ...
	I1124 13:59:22.701045  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 13:59:22.732465  549693 logs.go:123] Gathering logs for kubelet ...
	I1124 13:59:22.732490  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 13:59:20.586427  597884 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21932-348000/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-098307:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (4.462325662s)
	I1124 13:59:20.586463  597884 kic.go:203] duration metric: took 4.462505537s to extract preloaded images to volume ...
	W1124 13:59:20.586564  597884 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1124 13:59:20.586608  597884 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1124 13:59:20.586680  597884 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1124 13:59:20.644116  597884 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-098307 --name default-k8s-diff-port-098307 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-098307 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-098307 --network default-k8s-diff-port-098307 --ip 192.168.103.2 --volume default-k8s-diff-port-098307:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1124 13:59:20.945626  597884 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-098307 --format={{.State.Running}}
	I1124 13:59:20.963079  597884 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-098307 --format={{.State.Status}}
	I1124 13:59:20.981912  597884 cli_runner.go:164] Run: docker exec default-k8s-diff-port-098307 stat /var/lib/dpkg/alternatives/iptables
	I1124 13:59:21.029902  597884 oci.go:144] the created container "default-k8s-diff-port-098307" has a running status.
	I1124 13:59:21.029934  597884 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21932-348000/.minikube/machines/default-k8s-diff-port-098307/id_rsa...
	I1124 13:59:21.044615  597884 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21932-348000/.minikube/machines/default-k8s-diff-port-098307/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1124 13:59:21.069992  597884 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-098307 --format={{.State.Status}}
	I1124 13:59:21.086792  597884 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1124 13:59:21.086833  597884 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-098307 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1124 13:59:21.146349  597884 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-098307 --format={{.State.Status}}
	I1124 13:59:21.166786  597884 machine.go:94] provisionDockerMachine start ...
	I1124 13:59:21.166911  597884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-098307
	I1124 13:59:21.186849  597884 main.go:143] libmachine: Using SSH client type: native
	I1124 13:59:21.187228  597884 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33453 <nil> <nil>}
	I1124 13:59:21.187251  597884 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 13:59:21.188063  597884 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:54600->127.0.0.1:33453: read: connection reset by peer
	I1124 13:59:24.331450  597884 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-098307
	
	I1124 13:59:24.331483  597884 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-098307"
	I1124 13:59:24.331543  597884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-098307
	I1124 13:59:24.348366  597884 main.go:143] libmachine: Using SSH client type: native
	I1124 13:59:24.348576  597884 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33453 <nil> <nil>}
	I1124 13:59:24.348591  597884 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-098307 && echo "default-k8s-diff-port-098307" | sudo tee /etc/hostname
	I1124 13:59:24.499844  597884 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-098307
	
	I1124 13:59:24.499937  597884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-098307
	I1124 13:59:24.518001  597884 main.go:143] libmachine: Using SSH client type: native
	I1124 13:59:24.518269  597884 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33453 <nil> <nil>}
	I1124 13:59:24.518294  597884 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-098307' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-098307/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-098307' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 13:59:24.658953  597884 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 13:59:24.658980  597884 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21932-348000/.minikube CaCertPath:/home/jenkins/minikube-integration/21932-348000/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21932-348000/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21932-348000/.minikube}
	I1124 13:59:24.659010  597884 ubuntu.go:190] setting up certificates
	I1124 13:59:24.659027  597884 provision.go:84] configureAuth start
	I1124 13:59:24.659077  597884 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-098307
	I1124 13:59:24.677011  597884 provision.go:143] copyHostCerts
	I1124 13:59:24.677061  597884 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-348000/.minikube/ca.pem, removing ...
	I1124 13:59:24.677068  597884 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-348000/.minikube/ca.pem
	I1124 13:59:24.677144  597884 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-348000/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21932-348000/.minikube/ca.pem (1078 bytes)
	I1124 13:59:24.677239  597884 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-348000/.minikube/cert.pem, removing ...
	I1124 13:59:24.677247  597884 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-348000/.minikube/cert.pem
	I1124 13:59:24.677273  597884 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-348000/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21932-348000/.minikube/cert.pem (1123 bytes)
	I1124 13:59:24.677337  597884 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-348000/.minikube/key.pem, removing ...
	I1124 13:59:24.677344  597884 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-348000/.minikube/key.pem
	I1124 13:59:24.677367  597884 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-348000/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21932-348000/.minikube/key.pem (1675 bytes)
	I1124 13:59:24.677434  597884 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21932-348000/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21932-348000/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21932-348000/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-098307 san=[127.0.0.1 192.168.103.2 default-k8s-diff-port-098307 localhost minikube]
	I1124 13:59:24.700861  597884 provision.go:177] copyRemoteCerts
	I1124 13:59:24.700915  597884 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 13:59:24.700946  597884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-098307
	I1124 13:59:24.716539  597884 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/default-k8s-diff-port-098307/id_rsa Username:docker}
	I1124 13:59:24.816479  597884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1124 13:59:24.835966  597884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1124 13:59:24.853382  597884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1124 13:59:24.869978  597884 provision.go:87] duration metric: took 210.935424ms to configureAuth
	I1124 13:59:24.870003  597884 ubuntu.go:206] setting minikube options for container-runtime
	I1124 13:59:24.870163  597884 config.go:182] Loaded profile config "default-k8s-diff-port-098307": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 13:59:24.870290  597884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-098307
	I1124 13:59:24.887462  597884 main.go:143] libmachine: Using SSH client type: native
	I1124 13:59:24.887816  597884 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33453 <nil> <nil>}
	I1124 13:59:24.887849  597884 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1124 13:59:25.187470  597884 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1124 13:59:25.187500  597884 machine.go:97] duration metric: took 4.020690156s to provisionDockerMachine
	I1124 13:59:25.187511  597884 client.go:176] duration metric: took 9.63563611s to LocalClient.Create
	I1124 13:59:25.187534  597884 start.go:167] duration metric: took 9.635698843s to libmachine.API.Create "default-k8s-diff-port-098307"
	I1124 13:59:25.187556  597884 start.go:293] postStartSetup for "default-k8s-diff-port-098307" (driver="docker")
	I1124 13:59:25.187571  597884 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 13:59:25.187645  597884 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 13:59:25.187700  597884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-098307
	I1124 13:59:25.205459  597884 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/default-k8s-diff-port-098307/id_rsa Username:docker}
	I1124 13:59:25.309061  597884 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 13:59:25.312570  597884 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 13:59:25.312600  597884 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 13:59:25.312612  597884 filesync.go:126] Scanning /home/jenkins/minikube-integration/21932-348000/.minikube/addons for local assets ...
	I1124 13:59:25.312658  597884 filesync.go:126] Scanning /home/jenkins/minikube-integration/21932-348000/.minikube/files for local assets ...
	I1124 13:59:25.312756  597884 filesync.go:149] local asset: /home/jenkins/minikube-integration/21932-348000/.minikube/files/etc/ssl/certs/3515932.pem -> 3515932.pem in /etc/ssl/certs
	I1124 13:59:25.312879  597884 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 13:59:25.320637  597884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/files/etc/ssl/certs/3515932.pem --> /etc/ssl/certs/3515932.pem (1708 bytes)
	I1124 13:59:25.340727  597884 start.go:296] duration metric: took 153.136219ms for postStartSetup
	I1124 13:59:25.341145  597884 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-098307
	
	
	==> CRI-O <==
	Nov 24 13:58:50 old-k8s-version-551674 crio[568]: time="2025-11-24T13:58:50.08308365Z" level=info msg="Created container 4092d50c993eba017f68490cce7dfa72cbda4a6ca12b00a4ef41475527e0dbd6: kubernetes-dashboard/kubernetes-dashboard-8694d4445c-lfgtw/kubernetes-dashboard" id=db952c3d-7506-40d4-af35-7a7ac31d1a32 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 13:58:50 old-k8s-version-551674 crio[568]: time="2025-11-24T13:58:50.083706017Z" level=info msg="Starting container: 4092d50c993eba017f68490cce7dfa72cbda4a6ca12b00a4ef41475527e0dbd6" id=909cb119-63ba-48ab-a59e-655e5ed19c69 name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 13:58:50 old-k8s-version-551674 crio[568]: time="2025-11-24T13:58:50.085779944Z" level=info msg="Started container" PID=1749 containerID=4092d50c993eba017f68490cce7dfa72cbda4a6ca12b00a4ef41475527e0dbd6 description=kubernetes-dashboard/kubernetes-dashboard-8694d4445c-lfgtw/kubernetes-dashboard id=909cb119-63ba-48ab-a59e-655e5ed19c69 name=/runtime.v1.RuntimeService/StartContainer sandboxID=ac9f84d7e86e725eaee2146a048eaaf76bf03f5c14c72cbdb03065cdf4e154a6
	Nov 24 13:59:01 old-k8s-version-551674 crio[568]: time="2025-11-24T13:59:01.188433321Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=0b54cb3c-c75b-442d-95cd-d6f35a192a23 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 13:59:01 old-k8s-version-551674 crio[568]: time="2025-11-24T13:59:01.221492295Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=40cf8845-2709-4a00-a4f0-da1aef49a5dd name=/runtime.v1.ImageService/ImageStatus
	Nov 24 13:59:01 old-k8s-version-551674 crio[568]: time="2025-11-24T13:59:01.222579619Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=003223c1-9e0f-43a7-8ddb-1b713944408b name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 13:59:01 old-k8s-version-551674 crio[568]: time="2025-11-24T13:59:01.222729241Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 13:59:01 old-k8s-version-551674 crio[568]: time="2025-11-24T13:59:01.260799028Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 13:59:01 old-k8s-version-551674 crio[568]: time="2025-11-24T13:59:01.260985545Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/86de01fd18228979a7e84ef511951261ad38b7ed08269173465d7ee809a47125/merged/etc/passwd: no such file or directory"
	Nov 24 13:59:01 old-k8s-version-551674 crio[568]: time="2025-11-24T13:59:01.261010106Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/86de01fd18228979a7e84ef511951261ad38b7ed08269173465d7ee809a47125/merged/etc/group: no such file or directory"
	Nov 24 13:59:01 old-k8s-version-551674 crio[568]: time="2025-11-24T13:59:01.261218843Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 13:59:01 old-k8s-version-551674 crio[568]: time="2025-11-24T13:59:01.288600881Z" level=info msg="Created container d08df8429929bc9c5bafd803df7535916fa219a21f1a5928b33cccf8ac1b25c0: kube-system/storage-provisioner/storage-provisioner" id=003223c1-9e0f-43a7-8ddb-1b713944408b name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 13:59:01 old-k8s-version-551674 crio[568]: time="2025-11-24T13:59:01.289164661Z" level=info msg="Starting container: d08df8429929bc9c5bafd803df7535916fa219a21f1a5928b33cccf8ac1b25c0" id=702d543c-9b5c-4a81-8925-2944c4c2faa2 name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 13:59:01 old-k8s-version-551674 crio[568]: time="2025-11-24T13:59:01.291210685Z" level=info msg="Started container" PID=1773 containerID=d08df8429929bc9c5bafd803df7535916fa219a21f1a5928b33cccf8ac1b25c0 description=kube-system/storage-provisioner/storage-provisioner id=702d543c-9b5c-4a81-8925-2944c4c2faa2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f0940035ae999d4cca36241f4f94d82950d7fc1192bfdf77972b5812cd5683fa
	Nov 24 13:59:07 old-k8s-version-551674 crio[568]: time="2025-11-24T13:59:07.080441884Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=b61a63b3-fe98-4e6c-af03-6db3350c6e56 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 13:59:07 old-k8s-version-551674 crio[568]: time="2025-11-24T13:59:07.081381917Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=48905c63-f000-4400-9c26-9a1a6710bf02 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 13:59:07 old-k8s-version-551674 crio[568]: time="2025-11-24T13:59:07.082417443Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-tbfcd/dashboard-metrics-scraper" id=968c2269-6774-4488-ac5d-2117f61efd8a name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 13:59:07 old-k8s-version-551674 crio[568]: time="2025-11-24T13:59:07.082578884Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 13:59:07 old-k8s-version-551674 crio[568]: time="2025-11-24T13:59:07.089342398Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 13:59:07 old-k8s-version-551674 crio[568]: time="2025-11-24T13:59:07.089790257Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 13:59:07 old-k8s-version-551674 crio[568]: time="2025-11-24T13:59:07.123266827Z" level=info msg="Created container 058b37251a9c872df1f4d274fae8dd67de3dd59bfe3548eb6ad77a3eefbd1c90: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-tbfcd/dashboard-metrics-scraper" id=968c2269-6774-4488-ac5d-2117f61efd8a name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 13:59:07 old-k8s-version-551674 crio[568]: time="2025-11-24T13:59:07.123795218Z" level=info msg="Starting container: 058b37251a9c872df1f4d274fae8dd67de3dd59bfe3548eb6ad77a3eefbd1c90" id=789f37af-e8ff-4f52-9312-2a7114ab50e1 name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 13:59:07 old-k8s-version-551674 crio[568]: time="2025-11-24T13:59:07.125627385Z" level=info msg="Started container" PID=1789 containerID=058b37251a9c872df1f4d274fae8dd67de3dd59bfe3548eb6ad77a3eefbd1c90 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-tbfcd/dashboard-metrics-scraper id=789f37af-e8ff-4f52-9312-2a7114ab50e1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3e8596c7c2dfb5b8b38d52424ea817d07c51e1900c9cf39612bc5cf5fc063b34
	Nov 24 13:59:07 old-k8s-version-551674 crio[568]: time="2025-11-24T13:59:07.206324185Z" level=info msg="Removing container: f50e347c4b30fd524adb2392e73b4099a54c0089a58776aa38239fb4ed355ab2" id=fe9ce977-4106-4f4d-9e6e-94004bd3dcbf name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 24 13:59:07 old-k8s-version-551674 crio[568]: time="2025-11-24T13:59:07.216614873Z" level=info msg="Removed container f50e347c4b30fd524adb2392e73b4099a54c0089a58776aa38239fb4ed355ab2: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-tbfcd/dashboard-metrics-scraper" id=fe9ce977-4106-4f4d-9e6e-94004bd3dcbf name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	058b37251a9c8       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           19 seconds ago      Exited              dashboard-metrics-scraper   2                   3e8596c7c2dfb       dashboard-metrics-scraper-5f989dc9cf-tbfcd       kubernetes-dashboard
	d08df8429929b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           24 seconds ago      Running             storage-provisioner         1                   f0940035ae999       storage-provisioner                              kube-system
	4092d50c993eb       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   36 seconds ago      Running             kubernetes-dashboard        0                   ac9f84d7e86e7       kubernetes-dashboard-8694d4445c-lfgtw            kubernetes-dashboard
	5945066c1bf43       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           55 seconds ago      Running             coredns                     0                   75a8aadbc75ef       coredns-5dd5756b68-swk4w                         kube-system
	cc6aaefc6b92b       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           55 seconds ago      Running             busybox                     1                   b3584f76e52a7       busybox                                          default
	79cc18514458a       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           55 seconds ago      Running             kindnet-cni                 0                   95b32b83ef344       kindnet-sz57p                                    kube-system
	6a650aa5e54cd       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           55 seconds ago      Exited              storage-provisioner         0                   f0940035ae999       storage-provisioner                              kube-system
	20e66ad022041       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           55 seconds ago      Running             kube-proxy                  0                   275dca16bd233       kube-proxy-trn2x                                 kube-system
	9b964e539060b       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           59 seconds ago      Running             etcd                        0                   2e4a9466da4b0       etcd-old-k8s-version-551674                      kube-system
	e8b2cdae759a7       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           59 seconds ago      Running             kube-scheduler              0                   db905614b9c4b       kube-scheduler-old-k8s-version-551674            kube-system
	53b732b3e825d       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           59 seconds ago      Running             kube-apiserver              0                   f1ef3bf979018       kube-apiserver-old-k8s-version-551674            kube-system
	5609559ca1585       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           59 seconds ago      Running             kube-controller-manager     0                   3adf05dfaa797       kube-controller-manager-old-k8s-version-551674   kube-system
	
	
	==> coredns [5945066c1bf4374e52728a69c48a556dbb99eb23b787887fcdc19f79b27dbdf1] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 4c7f44b73086be760ec9e64204f63c5cc5a952c8c1c55ba0b41d8fc3315ce3c7d0259d04847cb8b4561043d4549603f3bccfd9b397eeb814eef159d244d26f39
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:53073 - 20079 "HINFO IN 3266694113286623893.392185349628822583. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.479082221s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> describe nodes <==
	Name:               old-k8s-version-551674
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-551674
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b5d1c9f4e75f4e638a533695fd62619949cefcab
	                    minikube.k8s.io/name=old-k8s-version-551674
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T13_57_26_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 13:57:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-551674
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 13:59:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 13:59:00 +0000   Mon, 24 Nov 2025 13:57:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 13:59:00 +0000   Mon, 24 Nov 2025 13:57:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 13:59:00 +0000   Mon, 24 Nov 2025 13:57:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 13:59:00 +0000   Mon, 24 Nov 2025 13:58:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    old-k8s-version-551674
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                3bfe263a-8777-48b0-84b7-18ab723a148d
	  Boot ID:                    9a34d64a-eb17-4892-9c0b-855837aec864
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 coredns-5dd5756b68-swk4w                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     108s
	  kube-system                 etcd-old-k8s-version-551674                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m2s
	  kube-system                 kindnet-sz57p                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      108s
	  kube-system                 kube-apiserver-old-k8s-version-551674             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 kube-controller-manager-old-k8s-version-551674    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 kube-proxy-trn2x                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-scheduler-old-k8s-version-551674             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m1s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-tbfcd        0 (0%)        0 (0%)      0 (0%)           0 (0%)         44s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-lfgtw             0 (0%)        0 (0%)      0 (0%)           0 (0%)         44s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 107s               kube-proxy       
	  Normal  Starting                 55s                kube-proxy       
	  Normal  NodeHasSufficientMemory  2m1s               kubelet          Node old-k8s-version-551674 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m1s               kubelet          Node old-k8s-version-551674 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m1s               kubelet          Node old-k8s-version-551674 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m1s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           109s               node-controller  Node old-k8s-version-551674 event: Registered Node old-k8s-version-551674 in Controller
	  Normal  NodeReady                96s                kubelet          Node old-k8s-version-551674 status is now: NodeReady
	  Normal  Starting                 60s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  60s (x8 over 60s)  kubelet          Node old-k8s-version-551674 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    60s (x8 over 60s)  kubelet          Node old-k8s-version-551674 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     60s (x8 over 60s)  kubelet          Node old-k8s-version-551674 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           44s                node-controller  Node old-k8s-version-551674 event: Registered Node old-k8s-version-551674 in Controller
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a c8 62 0b 56 43 08 06
	[  +0.000389] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 7e 1c 4f d3 0f 6c 08 06
	[Nov24 13:16] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +1.054353] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +1.023912] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +1.023872] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +1.023897] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +1.023891] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +2.047768] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +4.031637] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +8.191144] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[ +16.382308] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[Nov24 13:17] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	
	
	==> etcd [9b964e539060bf1c1a1da0a82bb08dc64769689e2441ea480573fa9ab7f2a79c] <==
	{"level":"info","ts":"2025-11-24T13:58:26.638333Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-24T13:58:26.640916Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-24T13:58:26.641161Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"dfc97eb0aae75b33","initial-advertise-peer-urls":["https://192.168.94.2:2380"],"listen-peer-urls":["https://192.168.94.2:2380"],"advertise-client-urls":["https://192.168.94.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.94.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-24T13:58:26.641224Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-24T13:58:26.641221Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.94.2:2380"}
	{"level":"info","ts":"2025-11-24T13:58:26.641298Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.94.2:2380"}
	{"level":"info","ts":"2025-11-24T13:58:28.530339Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 is starting a new election at term 2"}
	{"level":"info","ts":"2025-11-24T13:58:28.530412Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-11-24T13:58:28.530432Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 received MsgPreVoteResp from dfc97eb0aae75b33 at term 2"}
	{"level":"info","ts":"2025-11-24T13:58:28.530448Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became candidate at term 3"}
	{"level":"info","ts":"2025-11-24T13:58:28.530457Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 received MsgVoteResp from dfc97eb0aae75b33 at term 3"}
	{"level":"info","ts":"2025-11-24T13:58:28.530469Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became leader at term 3"}
	{"level":"info","ts":"2025-11-24T13:58:28.530483Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: dfc97eb0aae75b33 elected leader dfc97eb0aae75b33 at term 3"}
	{"level":"info","ts":"2025-11-24T13:58:28.53182Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"dfc97eb0aae75b33","local-member-attributes":"{Name:old-k8s-version-551674 ClientURLs:[https://192.168.94.2:2379]}","request-path":"/0/members/dfc97eb0aae75b33/attributes","cluster-id":"da400bbece288f5a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-24T13:58:28.53192Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-24T13:58:28.531947Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-24T13:58:28.532577Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-24T13:58:28.53261Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-24T13:58:28.533374Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-24T13:58:28.533376Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.94.2:2379"}
	{"level":"warn","ts":"2025-11-24T13:59:01.59847Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"205.317639ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-24T13:59:01.598717Z","caller":"traceutil/trace.go:171","msg":"trace[516285939] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:600; }","duration":"205.602004ms","start":"2025-11-24T13:59:01.393096Z","end":"2025-11-24T13:59:01.598698Z","steps":["trace[516285939] 'range keys from in-memory index tree'  (duration: 205.289709ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-24T13:59:01.599017Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"123.308434ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6571766388419872808 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/storage-provisioner.187af6062c4768a8\" mod_revision:498 > success:<request_put:<key:\"/registry/events/kube-system/storage-provisioner.187af6062c4768a8\" value_size:676 lease:6571766388419872123 >> failure:<request_range:<key:\"/registry/events/kube-system/storage-provisioner.187af6062c4768a8\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-11-24T13:59:01.599132Z","caller":"traceutil/trace.go:171","msg":"trace[1067413998] transaction","detail":"{read_only:false; response_revision:601; number_of_response:1; }","duration":"249.233324ms","start":"2025-11-24T13:59:01.349872Z","end":"2025-11-24T13:59:01.599105Z","steps":["trace[1067413998] 'process raft request'  (duration: 125.321101ms)","trace[1067413998] 'compare'  (duration: 123.18927ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-24T13:59:02.363554Z","caller":"traceutil/trace.go:171","msg":"trace[287514987] transaction","detail":"{read_only:false; response_revision:602; number_of_response:1; }","duration":"165.965787ms","start":"2025-11-24T13:59:02.197568Z","end":"2025-11-24T13:59:02.363533Z","steps":["trace[287514987] 'process raft request'  (duration: 165.835808ms)"],"step_count":1}
	
	
	==> kernel <==
	 13:59:26 up  2:41,  0 user,  load average: 2.78, 2.95, 2.01
	Linux old-k8s-version-551674 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [79cc18514458ab77dd20c134c4befb59891d55b0c82fe66dfc6a6a3676870f3c] <==
	I1124 13:58:30.719060       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 13:58:30.719297       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1124 13:58:30.719458       1 main.go:148] setting mtu 1500 for CNI 
	I1124 13:58:30.719473       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 13:58:30.719491       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T13:58:30Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 13:58:30.920654       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 13:58:30.920744       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 13:58:30.920759       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 13:58:31.018054       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1124 13:58:31.220851       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 13:58:31.220871       1 metrics.go:72] Registering metrics
	I1124 13:58:31.220939       1 controller.go:711] "Syncing nftables rules"
	I1124 13:58:40.920415       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1124 13:58:40.920467       1 main.go:301] handling current node
	I1124 13:58:50.920656       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1124 13:58:50.920683       1 main.go:301] handling current node
	I1124 13:59:00.920694       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1124 13:59:00.920722       1 main.go:301] handling current node
	I1124 13:59:10.921120       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1124 13:59:10.921160       1 main.go:301] handling current node
	I1124 13:59:20.926001       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1124 13:59:20.926034       1 main.go:301] handling current node
	
	
	==> kube-apiserver [53b732b3e825d4856ae0fadf78757166b2bc4c473356786dbb86c08c262503b3] <==
	I1124 13:58:29.563772       1 aggregator.go:166] initial CRD sync complete...
	I1124 13:58:29.563780       1 autoregister_controller.go:141] Starting autoregister controller
	I1124 13:58:29.563787       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1124 13:58:29.563797       1 cache.go:39] Caches are synced for autoregister controller
	E1124 13:58:29.564067       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["workload-low","catch-all","exempt","global-default","leader-election","node-high","system","workload-high"] items=[{},{},{},{},{},{},{},{}]
	I1124 13:58:29.564691       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1124 13:58:29.565610       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1124 13:58:29.565614       1 shared_informer.go:318] Caches are synced for configmaps
	I1124 13:58:30.341644       1 controller.go:624] quota admission added evaluator for: namespaces
	I1124 13:58:30.370477       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1124 13:58:30.393274       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 13:58:30.401949       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 13:58:30.409434       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1124 13:58:30.458859       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.106.115.201"}
	I1124 13:58:30.468416       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1124 13:58:30.475909       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.98.159.12"}
	E1124 13:58:39.564065       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["global-default","leader-election","node-high","system","workload-high","workload-low","catch-all","exempt"] items=[{},{},{},{},{},{},{},{}]
	I1124 13:58:42.633925       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1124 13:58:42.733476       1 controller.go:624] quota admission added evaluator for: endpoints
	I1124 13:58:42.783955       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1124 13:58:42.783956       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E1124 13:58:49.565019       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["workload-high","workload-low","catch-all","exempt","global-default","leader-election","node-high","system"] items=[{},{},{},{},{},{},{},{}]
	E1124 13:58:59.566165       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["system","workload-high","workload-low","catch-all","exempt","global-default","leader-election","node-high"] items=[{},{},{},{},{},{},{},{}]
	E1124 13:59:09.567250       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["system","workload-high","workload-low","catch-all","exempt","global-default","leader-election","node-high"] items=[{},{},{},{},{},{},{},{}]
	E1124 13:59:19.567909       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["global-default","leader-election","node-high","system","workload-high","workload-low","catch-all","exempt"] items=[{},{},{},{},{},{},{},{}]
	
	
	==> kube-controller-manager [5609559ca15853175b8f8a04131a0ec91f834eb1788f2cb29ae4934ff72c93a0] <==
	I1124 13:58:42.637970       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-5f989dc9cf to 1"
	I1124 13:58:42.845677       1 shared_informer.go:318] Caches are synced for garbage collector
	I1124 13:58:42.881079       1 shared_informer.go:318] Caches are synced for garbage collector
	I1124 13:58:42.881107       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1124 13:58:42.890946       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-tbfcd"
	I1124 13:58:42.891078       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-lfgtw"
	I1124 13:58:42.893071       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="562.969613ms"
	I1124 13:58:42.893588       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="159.869µs"
	I1124 13:58:42.896712       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="258.927582ms"
	I1124 13:58:42.896806       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="260.100196ms"
	I1124 13:58:42.902743       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="5.90437ms"
	I1124 13:58:42.902835       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="50.894µs"
	I1124 13:58:42.904097       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="7.337519ms"
	I1124 13:58:42.904198       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="65.2µs"
	I1124 13:58:42.909714       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="34.904µs"
	I1124 13:58:42.916611       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="35.578µs"
	I1124 13:58:46.155542       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="103.204µs"
	I1124 13:58:47.166223       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="61.154µs"
	I1124 13:58:48.171698       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="117.735µs"
	I1124 13:58:50.186121       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="10.675729ms"
	I1124 13:58:50.186224       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="39.33µs"
	I1124 13:59:07.219181       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="65.927µs"
	I1124 13:59:09.330057       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="7.185679ms"
	I1124 13:59:09.330171       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="71.305µs"
	I1124 13:59:13.211120       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="71.599µs"
	
	
	==> kube-proxy [20e66ad022041ccc62db2d900a48a8abc2e3d419daf8b5de2ef5544962096bfd] <==
	I1124 13:58:30.484431       1 server_others.go:69] "Using iptables proxy"
	I1124 13:58:30.493986       1 node.go:141] Successfully retrieved node IP: 192.168.94.2
	I1124 13:58:30.515650       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 13:58:30.518212       1 server_others.go:152] "Using iptables Proxier"
	I1124 13:58:30.518237       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1124 13:58:30.518242       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1124 13:58:30.518267       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1124 13:58:30.518523       1 server.go:846] "Version info" version="v1.28.0"
	I1124 13:58:30.518542       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 13:58:30.519867       1 config.go:97] "Starting endpoint slice config controller"
	I1124 13:58:30.520187       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1124 13:58:30.519939       1 config.go:315] "Starting node config controller"
	I1124 13:58:30.520286       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1124 13:58:30.520017       1 config.go:188] "Starting service config controller"
	I1124 13:58:30.520440       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1124 13:58:30.621165       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1124 13:58:30.621228       1 shared_informer.go:318] Caches are synced for node config
	I1124 13:58:30.621372       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [e8b2cdae759a78a53d4bb761e54084097e234b3a4625fcace0612d86af8ce8e7] <==
	I1124 13:58:26.987175       1 serving.go:348] Generated self-signed cert in-memory
	W1124 13:58:29.497677       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1124 13:58:29.497713       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1124 13:58:29.497727       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1124 13:58:29.497736       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1124 13:58:29.537902       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1124 13:58:29.537943       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 13:58:29.539668       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 13:58:29.539768       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1124 13:58:29.540663       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1124 13:58:29.540707       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1124 13:58:29.640239       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 24 13:58:43 old-k8s-version-551674 kubelet[736]: I1124 13:58:43.013212     736 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/a793b031-3ed9-4323-be38-0ae496db715b-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-lfgtw\" (UID: \"a793b031-3ed9-4323-be38-0ae496db715b\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-lfgtw"
	Nov 24 13:58:43 old-k8s-version-551674 kubelet[736]: I1124 13:58:43.013266     736 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5b5jx\" (UniqueName: \"kubernetes.io/projected/a793b031-3ed9-4323-be38-0ae496db715b-kube-api-access-5b5jx\") pod \"kubernetes-dashboard-8694d4445c-lfgtw\" (UID: \"a793b031-3ed9-4323-be38-0ae496db715b\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-lfgtw"
	Nov 24 13:58:43 old-k8s-version-551674 kubelet[736]: I1124 13:58:43.013302     736 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5sxlb\" (UniqueName: \"kubernetes.io/projected/1908d968-359a-41f3-bd6f-87b896ff4185-kube-api-access-5sxlb\") pod \"dashboard-metrics-scraper-5f989dc9cf-tbfcd\" (UID: \"1908d968-359a-41f3-bd6f-87b896ff4185\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-tbfcd"
	Nov 24 13:58:43 old-k8s-version-551674 kubelet[736]: I1124 13:58:43.013404     736 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/1908d968-359a-41f3-bd6f-87b896ff4185-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-tbfcd\" (UID: \"1908d968-359a-41f3-bd6f-87b896ff4185\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-tbfcd"
	Nov 24 13:58:46 old-k8s-version-551674 kubelet[736]: I1124 13:58:46.144042     736 scope.go:117] "RemoveContainer" containerID="b09eae88526ea09bb3fa4a323285da52aa56043994840eab8ef0542fafd17b5d"
	Nov 24 13:58:47 old-k8s-version-551674 kubelet[736]: I1124 13:58:47.150271     736 scope.go:117] "RemoveContainer" containerID="b09eae88526ea09bb3fa4a323285da52aa56043994840eab8ef0542fafd17b5d"
	Nov 24 13:58:47 old-k8s-version-551674 kubelet[736]: I1124 13:58:47.150514     736 scope.go:117] "RemoveContainer" containerID="f50e347c4b30fd524adb2392e73b4099a54c0089a58776aa38239fb4ed355ab2"
	Nov 24 13:58:47 old-k8s-version-551674 kubelet[736]: E1124 13:58:47.150940     736 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-tbfcd_kubernetes-dashboard(1908d968-359a-41f3-bd6f-87b896ff4185)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-tbfcd" podUID="1908d968-359a-41f3-bd6f-87b896ff4185"
	Nov 24 13:58:48 old-k8s-version-551674 kubelet[736]: I1124 13:58:48.155343     736 scope.go:117] "RemoveContainer" containerID="f50e347c4b30fd524adb2392e73b4099a54c0089a58776aa38239fb4ed355ab2"
	Nov 24 13:58:48 old-k8s-version-551674 kubelet[736]: E1124 13:58:48.155762     736 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-tbfcd_kubernetes-dashboard(1908d968-359a-41f3-bd6f-87b896ff4185)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-tbfcd" podUID="1908d968-359a-41f3-bd6f-87b896ff4185"
	Nov 24 13:58:53 old-k8s-version-551674 kubelet[736]: I1124 13:58:53.200806     736 scope.go:117] "RemoveContainer" containerID="f50e347c4b30fd524adb2392e73b4099a54c0089a58776aa38239fb4ed355ab2"
	Nov 24 13:58:53 old-k8s-version-551674 kubelet[736]: E1124 13:58:53.201187     736 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-tbfcd_kubernetes-dashboard(1908d968-359a-41f3-bd6f-87b896ff4185)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-tbfcd" podUID="1908d968-359a-41f3-bd6f-87b896ff4185"
	Nov 24 13:59:01 old-k8s-version-551674 kubelet[736]: I1124 13:59:01.187935     736 scope.go:117] "RemoveContainer" containerID="6a650aa5e54cdab0b1e4c8209675695989447069f4a73748f4310f040430f50c"
	Nov 24 13:59:01 old-k8s-version-551674 kubelet[736]: I1124 13:59:01.260058     736 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-lfgtw" podStartSLOduration=12.437420431 podCreationTimestamp="2025-11-24 13:58:42 +0000 UTC" firstStartedPulling="2025-11-24 13:58:43.220562073 +0000 UTC m=+17.229212508" lastFinishedPulling="2025-11-24 13:58:50.043130898 +0000 UTC m=+24.051781340" observedRunningTime="2025-11-24 13:58:50.176206278 +0000 UTC m=+24.184856722" watchObservedRunningTime="2025-11-24 13:59:01.259989263 +0000 UTC m=+35.268639707"
	Nov 24 13:59:07 old-k8s-version-551674 kubelet[736]: I1124 13:59:07.079741     736 scope.go:117] "RemoveContainer" containerID="f50e347c4b30fd524adb2392e73b4099a54c0089a58776aa38239fb4ed355ab2"
	Nov 24 13:59:07 old-k8s-version-551674 kubelet[736]: I1124 13:59:07.205114     736 scope.go:117] "RemoveContainer" containerID="f50e347c4b30fd524adb2392e73b4099a54c0089a58776aa38239fb4ed355ab2"
	Nov 24 13:59:07 old-k8s-version-551674 kubelet[736]: I1124 13:59:07.205336     736 scope.go:117] "RemoveContainer" containerID="058b37251a9c872df1f4d274fae8dd67de3dd59bfe3548eb6ad77a3eefbd1c90"
	Nov 24 13:59:07 old-k8s-version-551674 kubelet[736]: E1124 13:59:07.205716     736 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-tbfcd_kubernetes-dashboard(1908d968-359a-41f3-bd6f-87b896ff4185)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-tbfcd" podUID="1908d968-359a-41f3-bd6f-87b896ff4185"
	Nov 24 13:59:13 old-k8s-version-551674 kubelet[736]: I1124 13:59:13.200202     736 scope.go:117] "RemoveContainer" containerID="058b37251a9c872df1f4d274fae8dd67de3dd59bfe3548eb6ad77a3eefbd1c90"
	Nov 24 13:59:13 old-k8s-version-551674 kubelet[736]: E1124 13:59:13.200602     736 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-tbfcd_kubernetes-dashboard(1908d968-359a-41f3-bd6f-87b896ff4185)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-tbfcd" podUID="1908d968-359a-41f3-bd6f-87b896ff4185"
	Nov 24 13:59:23 old-k8s-version-551674 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 24 13:59:23 old-k8s-version-551674 kubelet[736]: I1124 13:59:23.087369     736 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Nov 24 13:59:23 old-k8s-version-551674 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 24 13:59:23 old-k8s-version-551674 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 24 13:59:23 old-k8s-version-551674 systemd[1]: kubelet.service: Consumed 1.523s CPU time.
	
	
	==> kubernetes-dashboard [4092d50c993eba017f68490cce7dfa72cbda4a6ca12b00a4ef41475527e0dbd6] <==
	2025/11/24 13:58:50 Starting overwatch
	2025/11/24 13:58:50 Using namespace: kubernetes-dashboard
	2025/11/24 13:58:50 Using in-cluster config to connect to apiserver
	2025/11/24 13:58:50 Using secret token for csrf signing
	2025/11/24 13:58:50 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/24 13:58:50 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/24 13:58:50 Successful initial request to the apiserver, version: v1.28.0
	2025/11/24 13:58:50 Generating JWE encryption key
	2025/11/24 13:58:50 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/24 13:58:50 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/24 13:58:50 Initializing JWE encryption key from synchronized object
	2025/11/24 13:58:50 Creating in-cluster Sidecar client
	2025/11/24 13:58:50 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/24 13:58:50 Serving insecurely on HTTP port: 9090
	2025/11/24 13:59:20 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [6a650aa5e54cdab0b1e4c8209675695989447069f4a73748f4310f040430f50c] <==
	I1124 13:58:30.452771       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1124 13:59:00.456668       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [d08df8429929bc9c5bafd803df7535916fa219a21f1a5928b33cccf8ac1b25c0] <==
	I1124 13:59:01.302286       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1124 13:59:01.309853       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1124 13:59:01.309883       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1124 13:59:18.788752       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1124 13:59:18.788836       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"7422d999-6149-47d4-9886-755e6760dd69", APIVersion:"v1", ResourceVersion:"618", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-551674_99b641db-ec67-4cf6-8922-ef877a65a63b became leader
	I1124 13:59:18.788926       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-551674_99b641db-ec67-4cf6-8922-ef877a65a63b!
	I1124 13:59:18.889235       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-551674_99b641db-ec67-4cf6-8922-ef877a65a63b!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-551674 -n old-k8s-version-551674
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-551674 -n old-k8s-version-551674: exit status 2 (351.681143ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-551674 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-551674
helpers_test.go:243: (dbg) docker inspect old-k8s-version-551674:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "cffc3242ebb7a478e0db0542c263e257bf174d889dd1e0cc2141101376465207",
	        "Created": "2025-11-24T13:57:09.159057998Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 585065,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T13:58:19.886142631Z",
	            "FinishedAt": "2025-11-24T13:58:19.027800386Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/cffc3242ebb7a478e0db0542c263e257bf174d889dd1e0cc2141101376465207/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/cffc3242ebb7a478e0db0542c263e257bf174d889dd1e0cc2141101376465207/hostname",
	        "HostsPath": "/var/lib/docker/containers/cffc3242ebb7a478e0db0542c263e257bf174d889dd1e0cc2141101376465207/hosts",
	        "LogPath": "/var/lib/docker/containers/cffc3242ebb7a478e0db0542c263e257bf174d889dd1e0cc2141101376465207/cffc3242ebb7a478e0db0542c263e257bf174d889dd1e0cc2141101376465207-json.log",
	        "Name": "/old-k8s-version-551674",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-551674:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-551674",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "cffc3242ebb7a478e0db0542c263e257bf174d889dd1e0cc2141101376465207",
	                "LowerDir": "/var/lib/docker/overlay2/b94f56af59637dbd0200d859a82956dca17af029a5f0461e9cc730804b642613-init/diff:/var/lib/docker/overlay2/b17d6205cf290186b389ac7c1255d7274fea54ef27df9ff8755bddd2d25eb638/diff",
	                "MergedDir": "/var/lib/docker/overlay2/b94f56af59637dbd0200d859a82956dca17af029a5f0461e9cc730804b642613/merged",
	                "UpperDir": "/var/lib/docker/overlay2/b94f56af59637dbd0200d859a82956dca17af029a5f0461e9cc730804b642613/diff",
	                "WorkDir": "/var/lib/docker/overlay2/b94f56af59637dbd0200d859a82956dca17af029a5f0461e9cc730804b642613/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-551674",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-551674/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-551674",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-551674",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-551674",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "5bd227cc8b21234fb34a28c0df04506457095635e0abd24c0add46e4f453b14c",
	            "SandboxKey": "/var/run/docker/netns/5bd227cc8b21",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33438"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33439"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33442"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33440"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33441"
	                    }
	                ]
	            },
	            "Networks": {
	                "old-k8s-version-551674": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "584350b1ae0057925436f12a069654d8b9b77ec40acdead63d77442ee50e6e01",
	                    "EndpointID": "3be3704f81ab6243ef9958900d279082527904b41a2a8e67bd652468d4842eba",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "e2:3b:fe:b9:f6:d8",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-551674",
	                        "cffc3242ebb7"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-551674 -n old-k8s-version-551674
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-551674 -n old-k8s-version-551674: exit status 2 (318.355957ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-551674 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-551674 logs -n 25: (1.086056161s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ ssh     │ -p cilium-165759 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ cilium-165759                │ jenkins │ v1.37.0 │ 24 Nov 25 13:57 UTC │                     │
	│ ssh     │ -p cilium-165759 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ cilium-165759                │ jenkins │ v1.37.0 │ 24 Nov 25 13:57 UTC │                     │
	│ ssh     │ -p cilium-165759 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ cilium-165759                │ jenkins │ v1.37.0 │ 24 Nov 25 13:57 UTC │                     │
	│ ssh     │ -p cilium-165759 sudo crio config                                                                                                                                                                                                             │ cilium-165759                │ jenkins │ v1.37.0 │ 24 Nov 25 13:57 UTC │                     │
	│ delete  │ -p cilium-165759                                                                                                                                                                                                                              │ cilium-165759                │ jenkins │ v1.37.0 │ 24 Nov 25 13:57 UTC │ 24 Nov 25 13:57 UTC │
	│ start   │ -p no-preload-495729 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-495729            │ jenkins │ v1.37.0 │ 24 Nov 25 13:57 UTC │ 24 Nov 25 13:58 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-551674 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-551674       │ jenkins │ v1.37.0 │ 24 Nov 25 13:58 UTC │                     │
	│ stop    │ -p old-k8s-version-551674 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-551674       │ jenkins │ v1.37.0 │ 24 Nov 25 13:58 UTC │ 24 Nov 25 13:58 UTC │
	│ addons  │ enable metrics-server -p no-preload-495729 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-495729            │ jenkins │ v1.37.0 │ 24 Nov 25 13:58 UTC │                     │
	│ stop    │ -p no-preload-495729 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-495729            │ jenkins │ v1.37.0 │ 24 Nov 25 13:58 UTC │ 24 Nov 25 13:58 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-551674 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-551674       │ jenkins │ v1.37.0 │ 24 Nov 25 13:58 UTC │ 24 Nov 25 13:58 UTC │
	│ start   │ -p old-k8s-version-551674 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-551674       │ jenkins │ v1.37.0 │ 24 Nov 25 13:58 UTC │ 24 Nov 25 13:59 UTC │
	│ addons  │ enable dashboard -p no-preload-495729 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-495729            │ jenkins │ v1.37.0 │ 24 Nov 25 13:58 UTC │ 24 Nov 25 13:58 UTC │
	│ start   │ -p no-preload-495729 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-495729            │ jenkins │ v1.37.0 │ 24 Nov 25 13:58 UTC │ 24 Nov 25 13:58 UTC │
	│ start   │ -p cert-expiration-107341 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-107341       │ jenkins │ v1.37.0 │ 24 Nov 25 13:58 UTC │ 24 Nov 25 13:58 UTC │
	│ delete  │ -p cert-expiration-107341                                                                                                                                                                                                                     │ cert-expiration-107341       │ jenkins │ v1.37.0 │ 24 Nov 25 13:58 UTC │ 24 Nov 25 13:58 UTC │
	│ start   │ -p embed-certs-456660 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-456660           │ jenkins │ v1.37.0 │ 24 Nov 25 13:58 UTC │                     │
	│ image   │ no-preload-495729 image list --format=json                                                                                                                                                                                                    │ no-preload-495729            │ jenkins │ v1.37.0 │ 24 Nov 25 13:59 UTC │ 24 Nov 25 13:59 UTC │
	│ pause   │ -p no-preload-495729 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-495729            │ jenkins │ v1.37.0 │ 24 Nov 25 13:59 UTC │                     │
	│ delete  │ -p no-preload-495729                                                                                                                                                                                                                          │ no-preload-495729            │ jenkins │ v1.37.0 │ 24 Nov 25 13:59 UTC │ 24 Nov 25 13:59 UTC │
	│ delete  │ -p no-preload-495729                                                                                                                                                                                                                          │ no-preload-495729            │ jenkins │ v1.37.0 │ 24 Nov 25 13:59 UTC │ 24 Nov 25 13:59 UTC │
	│ delete  │ -p disable-driver-mounts-036543                                                                                                                                                                                                               │ disable-driver-mounts-036543 │ jenkins │ v1.37.0 │ 24 Nov 25 13:59 UTC │ 24 Nov 25 13:59 UTC │
	│ start   │ -p default-k8s-diff-port-098307 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-098307 │ jenkins │ v1.37.0 │ 24 Nov 25 13:59 UTC │                     │
	│ image   │ old-k8s-version-551674 image list --format=json                                                                                                                                                                                               │ old-k8s-version-551674       │ jenkins │ v1.37.0 │ 24 Nov 25 13:59 UTC │ 24 Nov 25 13:59 UTC │
	│ pause   │ -p old-k8s-version-551674 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-551674       │ jenkins │ v1.37.0 │ 24 Nov 25 13:59 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 13:59:15
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 13:59:15.358537  597884 out.go:360] Setting OutFile to fd 1 ...
	I1124 13:59:15.358630  597884 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:59:15.358640  597884 out.go:374] Setting ErrFile to fd 2...
	I1124 13:59:15.358644  597884 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:59:15.358869  597884 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-348000/.minikube/bin
	I1124 13:59:15.359384  597884 out.go:368] Setting JSON to false
	I1124 13:59:15.360564  597884 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":9702,"bootTime":1763983053,"procs":324,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 13:59:15.360618  597884 start.go:143] virtualization: kvm guest
	I1124 13:59:15.362234  597884 out.go:179] * [default-k8s-diff-port-098307] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1124 13:59:15.363455  597884 out.go:179]   - MINIKUBE_LOCATION=21932
	I1124 13:59:15.363497  597884 notify.go:221] Checking for updates...
	I1124 13:59:15.365834  597884 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 13:59:15.367034  597884 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21932-348000/kubeconfig
	I1124 13:59:15.368029  597884 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21932-348000/.minikube
	I1124 13:59:15.370400  597884 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1124 13:59:15.371498  597884 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 13:59:15.372856  597884 config.go:182] Loaded profile config "embed-certs-456660": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 13:59:15.372965  597884 config.go:182] Loaded profile config "kubernetes-upgrade-061040": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 13:59:15.373045  597884 config.go:182] Loaded profile config "old-k8s-version-551674": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1124 13:59:15.373124  597884 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 13:59:15.397564  597884 docker.go:124] docker version: linux-29.0.3:Docker Engine - Community
	I1124 13:59:15.397713  597884 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 13:59:15.451882  597884 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-24 13:59:15.441972191 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 13:59:15.452035  597884 docker.go:319] overlay module found
	I1124 13:59:15.454008  597884 out.go:179] * Using the docker driver based on user configuration
	I1124 13:59:15.454990  597884 start.go:309] selected driver: docker
	I1124 13:59:15.455007  597884 start.go:927] validating driver "docker" against <nil>
	I1124 13:59:15.455021  597884 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 13:59:15.455628  597884 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 13:59:15.518269  597884 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-24 13:59:15.507535267 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 13:59:15.518485  597884 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1124 13:59:15.518729  597884 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 13:59:15.520052  597884 out.go:179] * Using Docker driver with root privileges
	I1124 13:59:15.521137  597884 cni.go:84] Creating CNI manager for ""
	I1124 13:59:15.521222  597884 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 13:59:15.521239  597884 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1124 13:59:15.521319  597884 start.go:353] cluster config:
	{Name:default-k8s-diff-port-098307 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-098307 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 13:59:15.522677  597884 out.go:179] * Starting "default-k8s-diff-port-098307" primary control-plane node in "default-k8s-diff-port-098307" cluster
	I1124 13:59:15.523763  597884 cache.go:134] Beginning downloading kic base image for docker with crio
	I1124 13:59:15.524951  597884 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1124 13:59:15.525935  597884 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 13:59:15.525974  597884 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21932-348000/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1124 13:59:15.525997  597884 cache.go:65] Caching tarball of preloaded images
	I1124 13:59:15.526024  597884 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1124 13:59:15.526131  597884 preload.go:238] Found /home/jenkins/minikube-integration/21932-348000/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1124 13:59:15.526151  597884 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1124 13:59:15.526285  597884 profile.go:143] Saving config to /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/default-k8s-diff-port-098307/config.json ...
	I1124 13:59:15.526319  597884 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/default-k8s-diff-port-098307/config.json: {Name:mk097457f2cf281c13f8600d6e4b69a245edfe55 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:59:15.549153  597884 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1124 13:59:15.549174  597884 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1124 13:59:15.549189  597884 cache.go:240] Successfully downloaded all kic artifacts
	I1124 13:59:15.549218  597884 start.go:360] acquireMachinesLock for default-k8s-diff-port-098307: {Name:mk2fcf2089e89fdca360031b39be958b7ce01e9d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 13:59:15.549312  597884 start.go:364] duration metric: took 76.428µs to acquireMachinesLock for "default-k8s-diff-port-098307"
	I1124 13:59:15.549341  597884 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-098307 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-098307 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 13:59:15.549422  597884 start.go:125] createHost starting for "" (driver="docker")
	I1124 13:59:15.238695  592938 out.go:252]   - Configuring RBAC rules ...
	I1124 13:59:15.238843  592938 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1124 13:59:15.242224  592938 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1124 13:59:15.247326  592938 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1124 13:59:15.249800  592938 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1124 13:59:15.253046  592938 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1124 13:59:15.255391  592938 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1124 13:59:15.602636  592938 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1124 13:59:16.020136  592938 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1124 13:59:16.602409  592938 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1124 13:59:16.603749  592938 kubeadm.go:319] 
	I1124 13:59:16.603864  592938 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1124 13:59:16.603874  592938 kubeadm.go:319] 
	I1124 13:59:16.603997  592938 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1124 13:59:16.604026  592938 kubeadm.go:319] 
	I1124 13:59:16.604077  592938 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1124 13:59:16.604145  592938 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1124 13:59:16.604208  592938 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1124 13:59:16.604216  592938 kubeadm.go:319] 
	I1124 13:59:16.604309  592938 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1124 13:59:16.604324  592938 kubeadm.go:319] 
	I1124 13:59:16.604383  592938 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1124 13:59:16.604392  592938 kubeadm.go:319] 
	I1124 13:59:16.604463  592938 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1124 13:59:16.604549  592938 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1124 13:59:16.604627  592938 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1124 13:59:16.604635  592938 kubeadm.go:319] 
	I1124 13:59:16.604739  592938 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1124 13:59:16.604827  592938 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1124 13:59:16.604835  592938 kubeadm.go:319] 
	I1124 13:59:16.604946  592938 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token i0ni2q.905unti7i418tiul \
	I1124 13:59:16.605072  592938 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:8508f5e374ce1614712f271f50423a392652f73206d8a868cc7aac45c80e4a0c \
	I1124 13:59:16.605099  592938 kubeadm.go:319] 	--control-plane 
	I1124 13:59:16.605106  592938 kubeadm.go:319] 
	I1124 13:59:16.605207  592938 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1124 13:59:16.605214  592938 kubeadm.go:319] 
	I1124 13:59:16.605321  592938 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token i0ni2q.905unti7i418tiul \
	I1124 13:59:16.605477  592938 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:8508f5e374ce1614712f271f50423a392652f73206d8a868cc7aac45c80e4a0c 
	I1124 13:59:16.608171  592938 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1044-gcp\n", err: exit status 1
	I1124 13:59:16.608324  592938 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1124 13:59:16.608353  592938 cni.go:84] Creating CNI manager for ""
	I1124 13:59:16.608365  592938 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 13:59:16.609947  592938 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1124 13:59:16.611036  592938 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1124 13:59:16.615311  592938 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1124 13:59:16.615331  592938 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1124 13:59:16.630146  592938 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1124 13:59:16.872457  592938 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1124 13:59:16.872575  592938 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:59:16.872584  592938 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-456660 minikube.k8s.io/updated_at=2025_11_24T13_59_16_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=b5d1c9f4e75f4e638a533695fd62619949cefcab minikube.k8s.io/name=embed-certs-456660 minikube.k8s.io/primary=true
	I1124 13:59:16.885542  592938 ops.go:34] apiserver oom_adj: -16
	I1124 13:59:16.961862  592938 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:59:17.462916  592938 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:59:15.924277  549693 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 13:59:15.924922  549693 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 13:59:15.924993  549693 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 13:59:15.925055  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 13:59:15.956392  549693 cri.go:89] found id: "89e3f961c6df2973e09087d6d2b6be846ea3406ee999bc335e9e0f6516db0696"
	I1124 13:59:15.956420  549693 cri.go:89] found id: ""
	I1124 13:59:15.956430  549693 logs.go:282] 1 containers: [89e3f961c6df2973e09087d6d2b6be846ea3406ee999bc335e9e0f6516db0696]
	I1124 13:59:15.956486  549693 ssh_runner.go:195] Run: which crictl
	I1124 13:59:15.962049  549693 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 13:59:15.962111  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 13:59:16.011872  549693 cri.go:89] found id: ""
	I1124 13:59:16.012071  549693 logs.go:282] 0 containers: []
	W1124 13:59:16.012099  549693 logs.go:284] No container was found matching "etcd"
	I1124 13:59:16.012110  549693 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 13:59:16.012183  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 13:59:16.046443  549693 cri.go:89] found id: ""
	I1124 13:59:16.046468  549693 logs.go:282] 0 containers: []
	W1124 13:59:16.046479  549693 logs.go:284] No container was found matching "coredns"
	I1124 13:59:16.046487  549693 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 13:59:16.046540  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 13:59:16.073728  549693 cri.go:89] found id: "09474ec71ef80f8f9036b351250c36774a1251b7d7957550d815fc37dbb0f4ff"
	I1124 13:59:16.073754  549693 cri.go:89] found id: ""
	I1124 13:59:16.073764  549693 logs.go:282] 1 containers: [09474ec71ef80f8f9036b351250c36774a1251b7d7957550d815fc37dbb0f4ff]
	I1124 13:59:16.073830  549693 ssh_runner.go:195] Run: which crictl
	I1124 13:59:16.078391  549693 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 13:59:16.078455  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 13:59:16.113265  549693 cri.go:89] found id: ""
	I1124 13:59:16.113290  549693 logs.go:282] 0 containers: []
	W1124 13:59:16.113309  549693 logs.go:284] No container was found matching "kube-proxy"
	I1124 13:59:16.113317  549693 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 13:59:16.113374  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 13:59:16.144874  549693 cri.go:89] found id: "df08eccce5717e621bf01ad84d40ae0cec6de290402e562d268ae5529f56350f"
	I1124 13:59:16.144908  549693 cri.go:89] found id: ""
	I1124 13:59:16.144919  549693 logs.go:282] 1 containers: [df08eccce5717e621bf01ad84d40ae0cec6de290402e562d268ae5529f56350f]
	I1124 13:59:16.144984  549693 ssh_runner.go:195] Run: which crictl
	I1124 13:59:16.149591  549693 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 13:59:16.149639  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 13:59:16.179303  549693 cri.go:89] found id: ""
	I1124 13:59:16.179326  549693 logs.go:282] 0 containers: []
	W1124 13:59:16.179337  549693 logs.go:284] No container was found matching "kindnet"
	I1124 13:59:16.179344  549693 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1124 13:59:16.179394  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 13:59:16.209028  549693 cri.go:89] found id: ""
	I1124 13:59:16.209051  549693 logs.go:282] 0 containers: []
	W1124 13:59:16.209061  549693 logs.go:284] No container was found matching "storage-provisioner"
	I1124 13:59:16.209075  549693 logs.go:123] Gathering logs for kubelet ...
	I1124 13:59:16.209090  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 13:59:16.313612  549693 logs.go:123] Gathering logs for dmesg ...
	I1124 13:59:16.313650  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 13:59:16.334144  549693 logs.go:123] Gathering logs for describe nodes ...
	I1124 13:59:16.334177  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 13:59:16.398857  549693 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 13:59:16.398882  549693 logs.go:123] Gathering logs for kube-apiserver [89e3f961c6df2973e09087d6d2b6be846ea3406ee999bc335e9e0f6516db0696] ...
	I1124 13:59:16.398911  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 89e3f961c6df2973e09087d6d2b6be846ea3406ee999bc335e9e0f6516db0696"
	I1124 13:59:16.437749  549693 logs.go:123] Gathering logs for kube-scheduler [09474ec71ef80f8f9036b351250c36774a1251b7d7957550d815fc37dbb0f4ff] ...
	I1124 13:59:16.437788  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 09474ec71ef80f8f9036b351250c36774a1251b7d7957550d815fc37dbb0f4ff"
	I1124 13:59:16.499958  549693 logs.go:123] Gathering logs for kube-controller-manager [df08eccce5717e621bf01ad84d40ae0cec6de290402e562d268ae5529f56350f] ...
	I1124 13:59:16.499992  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 df08eccce5717e621bf01ad84d40ae0cec6de290402e562d268ae5529f56350f"
	I1124 13:59:16.530576  549693 logs.go:123] Gathering logs for CRI-O ...
	I1124 13:59:16.530614  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 13:59:16.598120  549693 logs.go:123] Gathering logs for container status ...
	I1124 13:59:16.598154  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 13:59:19.135804  549693 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 13:59:19.136221  549693 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 13:59:19.136287  549693 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 13:59:19.136336  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 13:59:19.162240  549693 cri.go:89] found id: "89e3f961c6df2973e09087d6d2b6be846ea3406ee999bc335e9e0f6516db0696"
	I1124 13:59:19.162257  549693 cri.go:89] found id: ""
	I1124 13:59:19.162266  549693 logs.go:282] 1 containers: [89e3f961c6df2973e09087d6d2b6be846ea3406ee999bc335e9e0f6516db0696]
	I1124 13:59:19.162308  549693 ssh_runner.go:195] Run: which crictl
	I1124 13:59:19.166168  549693 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 13:59:19.166233  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 13:59:19.191918  549693 cri.go:89] found id: ""
	I1124 13:59:19.191940  549693 logs.go:282] 0 containers: []
	W1124 13:59:19.191948  549693 logs.go:284] No container was found matching "etcd"
	I1124 13:59:19.191956  549693 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 13:59:19.192005  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 13:59:19.215713  549693 cri.go:89] found id: ""
	I1124 13:59:19.215738  549693 logs.go:282] 0 containers: []
	W1124 13:59:19.215747  549693 logs.go:284] No container was found matching "coredns"
	I1124 13:59:19.215754  549693 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 13:59:19.215796  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 13:59:19.241185  549693 cri.go:89] found id: "09474ec71ef80f8f9036b351250c36774a1251b7d7957550d815fc37dbb0f4ff"
	I1124 13:59:19.241209  549693 cri.go:89] found id: ""
	I1124 13:59:19.241220  549693 logs.go:282] 1 containers: [09474ec71ef80f8f9036b351250c36774a1251b7d7957550d815fc37dbb0f4ff]
	I1124 13:59:19.241269  549693 ssh_runner.go:195] Run: which crictl
	I1124 13:59:19.244960  549693 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 13:59:19.245026  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 13:59:19.270257  549693 cri.go:89] found id: ""
	I1124 13:59:19.270276  549693 logs.go:282] 0 containers: []
	W1124 13:59:19.270284  549693 logs.go:284] No container was found matching "kube-proxy"
	I1124 13:59:19.270289  549693 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 13:59:19.270343  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 13:59:19.297085  549693 cri.go:89] found id: "df08eccce5717e621bf01ad84d40ae0cec6de290402e562d268ae5529f56350f"
	I1124 13:59:19.297107  549693 cri.go:89] found id: ""
	I1124 13:59:19.297118  549693 logs.go:282] 1 containers: [df08eccce5717e621bf01ad84d40ae0cec6de290402e562d268ae5529f56350f]
	I1124 13:59:19.297172  549693 ssh_runner.go:195] Run: which crictl
	I1124 13:59:19.301116  549693 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 13:59:19.301179  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 13:59:19.327461  549693 cri.go:89] found id: ""
	I1124 13:59:19.327485  549693 logs.go:282] 0 containers: []
	W1124 13:59:19.327492  549693 logs.go:284] No container was found matching "kindnet"
	I1124 13:59:19.327498  549693 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1124 13:59:19.327540  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 13:59:19.352380  549693 cri.go:89] found id: ""
	I1124 13:59:19.352406  549693 logs.go:282] 0 containers: []
	W1124 13:59:19.352417  549693 logs.go:284] No container was found matching "storage-provisioner"
	I1124 13:59:19.352429  549693 logs.go:123] Gathering logs for describe nodes ...
	I1124 13:59:19.352444  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 13:59:19.406834  549693 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 13:59:19.406856  549693 logs.go:123] Gathering logs for kube-apiserver [89e3f961c6df2973e09087d6d2b6be846ea3406ee999bc335e9e0f6516db0696] ...
	I1124 13:59:19.406872  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 89e3f961c6df2973e09087d6d2b6be846ea3406ee999bc335e9e0f6516db0696"
	I1124 13:59:19.438687  549693 logs.go:123] Gathering logs for kube-scheduler [09474ec71ef80f8f9036b351250c36774a1251b7d7957550d815fc37dbb0f4ff] ...
	I1124 13:59:19.438715  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 09474ec71ef80f8f9036b351250c36774a1251b7d7957550d815fc37dbb0f4ff"
	I1124 13:59:19.492096  549693 logs.go:123] Gathering logs for kube-controller-manager [df08eccce5717e621bf01ad84d40ae0cec6de290402e562d268ae5529f56350f] ...
	I1124 13:59:19.492126  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 df08eccce5717e621bf01ad84d40ae0cec6de290402e562d268ae5529f56350f"
	I1124 13:59:19.520104  549693 logs.go:123] Gathering logs for CRI-O ...
	I1124 13:59:19.520128  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 13:59:19.579431  549693 logs.go:123] Gathering logs for container status ...
	I1124 13:59:19.579456  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 13:59:19.608235  549693 logs.go:123] Gathering logs for kubelet ...
	I1124 13:59:19.608262  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 13:59:19.694673  549693 logs.go:123] Gathering logs for dmesg ...
	I1124 13:59:19.694700  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 13:59:15.551582  597884 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1124 13:59:15.551837  597884 start.go:159] libmachine.API.Create for "default-k8s-diff-port-098307" (driver="docker")
	I1124 13:59:15.551869  597884 client.go:173] LocalClient.Create starting
	I1124 13:59:15.551971  597884 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21932-348000/.minikube/certs/ca.pem
	I1124 13:59:15.552008  597884 main.go:143] libmachine: Decoding PEM data...
	I1124 13:59:15.552026  597884 main.go:143] libmachine: Parsing certificate...
	I1124 13:59:15.552077  597884 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21932-348000/.minikube/certs/cert.pem
	I1124 13:59:15.552096  597884 main.go:143] libmachine: Decoding PEM data...
	I1124 13:59:15.552105  597884 main.go:143] libmachine: Parsing certificate...
	I1124 13:59:15.552404  597884 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-098307 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1124 13:59:15.570087  597884 cli_runner.go:211] docker network inspect default-k8s-diff-port-098307 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1124 13:59:15.570167  597884 network_create.go:284] running [docker network inspect default-k8s-diff-port-098307] to gather additional debugging logs...
	I1124 13:59:15.570189  597884 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-098307
	W1124 13:59:15.588277  597884 cli_runner.go:211] docker network inspect default-k8s-diff-port-098307 returned with exit code 1
	I1124 13:59:15.588314  597884 network_create.go:287] error running [docker network inspect default-k8s-diff-port-098307]: docker network inspect default-k8s-diff-port-098307: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-098307 not found
	I1124 13:59:15.588326  597884 network_create.go:289] output of [docker network inspect default-k8s-diff-port-098307]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-098307 not found
	
	** /stderr **
	I1124 13:59:15.588428  597884 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 13:59:15.606303  597884 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-d51e7dfe1049 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:4a:86:1b:17:16:ff} reservation:<nil>}
	I1124 13:59:15.607437  597884 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-e3a6280986d1 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:92:e6:88:24:ba:69} reservation:<nil>}
	I1124 13:59:15.608134  597884 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-e4f79d672777 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:32:e2:7c:23:0e:27} reservation:<nil>}
	I1124 13:59:15.608659  597884 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-283ea71f66a5 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:b6:70:12:a2:88:dd} reservation:<nil>}
	I1124 13:59:15.609394  597884 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-95ddebcd3d89 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:ba:53:32:2f:bb:ed} reservation:<nil>}
	I1124 13:59:15.610047  597884 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-584350b1ae00 IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:72:e5:2a:e9:2d:0e} reservation:<nil>}
	I1124 13:59:15.610839  597884 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001efee90}
	I1124 13:59:15.610865  597884 network_create.go:124] attempt to create docker network default-k8s-diff-port-098307 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1124 13:59:15.610944  597884 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-098307 default-k8s-diff-port-098307
	I1124 13:59:15.663555  597884 network_create.go:108] docker network default-k8s-diff-port-098307 192.168.103.0/24 created
	I1124 13:59:15.663594  597884 kic.go:121] calculated static IP "192.168.103.2" for the "default-k8s-diff-port-098307" container
	I1124 13:59:15.663680  597884 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1124 13:59:15.680995  597884 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-098307 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-098307 --label created_by.minikube.sigs.k8s.io=true
	I1124 13:59:15.698962  597884 oci.go:103] Successfully created a docker volume default-k8s-diff-port-098307
	I1124 13:59:15.699071  597884 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-098307-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-098307 --entrypoint /usr/bin/test -v default-k8s-diff-port-098307:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1124 13:59:16.123842  597884 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-098307
	I1124 13:59:16.123934  597884 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 13:59:16.123953  597884 kic.go:194] Starting extracting preloaded images to volume ...
	I1124 13:59:16.124049  597884 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21932-348000/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-098307:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
	I1124 13:59:17.962606  592938 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:59:18.462833  592938 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:59:18.962953  592938 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:59:19.462019  592938 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:59:19.962247  592938 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:59:20.462866  592938 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:59:20.962711  592938 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:59:21.462063  592938 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:59:21.529726  592938 kubeadm.go:1114] duration metric: took 4.657208845s to wait for elevateKubeSystemPrivileges
	I1124 13:59:21.529759  592938 kubeadm.go:403] duration metric: took 14.354940344s to StartCluster
	I1124 13:59:21.529783  592938 settings.go:142] acquiring lock: {Name:mk72c17792ecaf5f4aecae499df19a0043a48eea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:59:21.529856  592938 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21932-348000/kubeconfig
	I1124 13:59:21.531521  592938 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-348000/kubeconfig: {Name:mk6bbc2300c711b206dd5e2ef6fd04da250c6338 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:59:21.531717  592938 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 13:59:21.531736  592938 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1124 13:59:21.531777  592938 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 13:59:21.531877  592938 addons.go:70] Setting storage-provisioner=true in profile "embed-certs-456660"
	I1124 13:59:21.531909  592938 addons.go:239] Setting addon storage-provisioner=true in "embed-certs-456660"
	I1124 13:59:21.531941  592938 host.go:66] Checking if "embed-certs-456660" exists ...
	I1124 13:59:21.531969  592938 config.go:182] Loaded profile config "embed-certs-456660": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 13:59:21.532019  592938 addons.go:70] Setting default-storageclass=true in profile "embed-certs-456660"
	I1124 13:59:21.532054  592938 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-456660"
	I1124 13:59:21.532416  592938 cli_runner.go:164] Run: docker container inspect embed-certs-456660 --format={{.State.Status}}
	I1124 13:59:21.532504  592938 cli_runner.go:164] Run: docker container inspect embed-certs-456660 --format={{.State.Status}}
	I1124 13:59:21.534292  592938 out.go:179] * Verifying Kubernetes components...
	I1124 13:59:21.535359  592938 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 13:59:21.555669  592938 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 13:59:21.556613  592938 addons.go:239] Setting addon default-storageclass=true in "embed-certs-456660"
	I1124 13:59:21.556655  592938 host.go:66] Checking if "embed-certs-456660" exists ...
	I1124 13:59:21.557008  592938 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 13:59:21.557029  592938 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 13:59:21.557102  592938 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-456660
	I1124 13:59:21.557159  592938 cli_runner.go:164] Run: docker container inspect embed-certs-456660 --format={{.State.Status}}
	I1124 13:59:21.589530  592938 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/embed-certs-456660/id_rsa Username:docker}
	I1124 13:59:21.590772  592938 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 13:59:21.590792  592938 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 13:59:21.590854  592938 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-456660
	I1124 13:59:21.613209  592938 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33448 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/embed-certs-456660/id_rsa Username:docker}
	I1124 13:59:21.629112  592938 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1124 13:59:21.680659  592938 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 13:59:21.722752  592938 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 13:59:21.736612  592938 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 13:59:21.818069  592938 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1124 13:59:21.819528  592938 node_ready.go:35] waiting up to 6m0s for node "embed-certs-456660" to be "Ready" ...
	I1124 13:59:22.000817  592938 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1124 13:59:22.002909  592938 addons.go:530] duration metric: took 471.127686ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1124 13:59:22.322280  592938 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-456660" context rescaled to 1 replicas
	I1124 13:59:22.212187  549693 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1124 13:59:22.212659  549693 api_server.go:269] stopped: https://192.168.76.2:8443/healthz: Get "https://192.168.76.2:8443/healthz": dial tcp 192.168.76.2:8443: connect: connection refused
	I1124 13:59:22.212713  549693 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1124 13:59:22.212759  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1124 13:59:22.241554  549693 cri.go:89] found id: "89e3f961c6df2973e09087d6d2b6be846ea3406ee999bc335e9e0f6516db0696"
	I1124 13:59:22.241575  549693 cri.go:89] found id: ""
	I1124 13:59:22.241585  549693 logs.go:282] 1 containers: [89e3f961c6df2973e09087d6d2b6be846ea3406ee999bc335e9e0f6516db0696]
	I1124 13:59:22.241647  549693 ssh_runner.go:195] Run: which crictl
	I1124 13:59:22.246552  549693 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1124 13:59:22.246611  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1124 13:59:22.273701  549693 cri.go:89] found id: ""
	I1124 13:59:22.273729  549693 logs.go:282] 0 containers: []
	W1124 13:59:22.273739  549693 logs.go:284] No container was found matching "etcd"
	I1124 13:59:22.273747  549693 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1124 13:59:22.273810  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1124 13:59:22.299054  549693 cri.go:89] found id: ""
	I1124 13:59:22.299084  549693 logs.go:282] 0 containers: []
	W1124 13:59:22.299093  549693 logs.go:284] No container was found matching "coredns"
	I1124 13:59:22.299098  549693 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1124 13:59:22.299148  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1124 13:59:22.324688  549693 cri.go:89] found id: "09474ec71ef80f8f9036b351250c36774a1251b7d7957550d815fc37dbb0f4ff"
	I1124 13:59:22.324703  549693 cri.go:89] found id: ""
	I1124 13:59:22.324710  549693 logs.go:282] 1 containers: [09474ec71ef80f8f9036b351250c36774a1251b7d7957550d815fc37dbb0f4ff]
	I1124 13:59:22.324749  549693 ssh_runner.go:195] Run: which crictl
	I1124 13:59:22.328375  549693 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1124 13:59:22.328439  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1124 13:59:22.354635  549693 cri.go:89] found id: ""
	I1124 13:59:22.354663  549693 logs.go:282] 0 containers: []
	W1124 13:59:22.354673  549693 logs.go:284] No container was found matching "kube-proxy"
	I1124 13:59:22.354694  549693 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1124 13:59:22.354759  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1124 13:59:22.383006  549693 cri.go:89] found id: "df08eccce5717e621bf01ad84d40ae0cec6de290402e562d268ae5529f56350f"
	I1124 13:59:22.383029  549693 cri.go:89] found id: ""
	I1124 13:59:22.383039  549693 logs.go:282] 1 containers: [df08eccce5717e621bf01ad84d40ae0cec6de290402e562d268ae5529f56350f]
	I1124 13:59:22.383100  549693 ssh_runner.go:195] Run: which crictl
	I1124 13:59:22.387706  549693 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1124 13:59:22.387784  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1124 13:59:22.418231  549693 cri.go:89] found id: ""
	I1124 13:59:22.418260  549693 logs.go:282] 0 containers: []
	W1124 13:59:22.418272  549693 logs.go:284] No container was found matching "kindnet"
	I1124 13:59:22.418280  549693 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1124 13:59:22.418337  549693 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1124 13:59:22.448747  549693 cri.go:89] found id: ""
	I1124 13:59:22.448781  549693 logs.go:282] 0 containers: []
	W1124 13:59:22.448791  549693 logs.go:284] No container was found matching "storage-provisioner"
	I1124 13:59:22.448803  549693 logs.go:123] Gathering logs for dmesg ...
	I1124 13:59:22.448817  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1124 13:59:22.466954  549693 logs.go:123] Gathering logs for describe nodes ...
	I1124 13:59:22.466981  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1124 13:59:22.526252  549693 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1124 13:59:22.526276  549693 logs.go:123] Gathering logs for kube-apiserver [89e3f961c6df2973e09087d6d2b6be846ea3406ee999bc335e9e0f6516db0696] ...
	I1124 13:59:22.526291  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 89e3f961c6df2973e09087d6d2b6be846ea3406ee999bc335e9e0f6516db0696"
	I1124 13:59:22.557484  549693 logs.go:123] Gathering logs for kube-scheduler [09474ec71ef80f8f9036b351250c36774a1251b7d7957550d815fc37dbb0f4ff] ...
	I1124 13:59:22.557511  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 09474ec71ef80f8f9036b351250c36774a1251b7d7957550d815fc37dbb0f4ff"
	I1124 13:59:22.611689  549693 logs.go:123] Gathering logs for kube-controller-manager [df08eccce5717e621bf01ad84d40ae0cec6de290402e562d268ae5529f56350f] ...
	I1124 13:59:22.611719  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/local/bin/crictl logs --tail 400 df08eccce5717e621bf01ad84d40ae0cec6de290402e562d268ae5529f56350f"
	I1124 13:59:22.640792  549693 logs.go:123] Gathering logs for CRI-O ...
	I1124 13:59:22.640815  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1124 13:59:22.701013  549693 logs.go:123] Gathering logs for container status ...
	I1124 13:59:22.701045  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1124 13:59:22.732465  549693 logs.go:123] Gathering logs for kubelet ...
	I1124 13:59:22.732490  549693 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1124 13:59:20.586427  597884 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21932-348000/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-098307:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (4.462325662s)
	I1124 13:59:20.586463  597884 kic.go:203] duration metric: took 4.462505537s to extract preloaded images to volume ...
	W1124 13:59:20.586564  597884 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1124 13:59:20.586608  597884 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1124 13:59:20.586680  597884 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1124 13:59:20.644116  597884 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-098307 --name default-k8s-diff-port-098307 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-098307 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-098307 --network default-k8s-diff-port-098307 --ip 192.168.103.2 --volume default-k8s-diff-port-098307:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1124 13:59:20.945626  597884 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-098307 --format={{.State.Running}}
	I1124 13:59:20.963079  597884 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-098307 --format={{.State.Status}}
	I1124 13:59:20.981912  597884 cli_runner.go:164] Run: docker exec default-k8s-diff-port-098307 stat /var/lib/dpkg/alternatives/iptables
	I1124 13:59:21.029902  597884 oci.go:144] the created container "default-k8s-diff-port-098307" has a running status.
	I1124 13:59:21.029934  597884 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21932-348000/.minikube/machines/default-k8s-diff-port-098307/id_rsa...
	I1124 13:59:21.044615  597884 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21932-348000/.minikube/machines/default-k8s-diff-port-098307/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1124 13:59:21.069992  597884 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-098307 --format={{.State.Status}}
	I1124 13:59:21.086792  597884 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1124 13:59:21.086833  597884 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-098307 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1124 13:59:21.146349  597884 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-098307 --format={{.State.Status}}
	I1124 13:59:21.166786  597884 machine.go:94] provisionDockerMachine start ...
	I1124 13:59:21.166911  597884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-098307
	I1124 13:59:21.186849  597884 main.go:143] libmachine: Using SSH client type: native
	I1124 13:59:21.187228  597884 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33453 <nil> <nil>}
	I1124 13:59:21.187251  597884 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 13:59:21.188063  597884 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:54600->127.0.0.1:33453: read: connection reset by peer
	I1124 13:59:24.331450  597884 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-098307
	
	I1124 13:59:24.331483  597884 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-098307"
	I1124 13:59:24.331543  597884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-098307
	I1124 13:59:24.348366  597884 main.go:143] libmachine: Using SSH client type: native
	I1124 13:59:24.348576  597884 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33453 <nil> <nil>}
	I1124 13:59:24.348591  597884 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-098307 && echo "default-k8s-diff-port-098307" | sudo tee /etc/hostname
	I1124 13:59:24.499844  597884 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-098307
	
	I1124 13:59:24.499937  597884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-098307
	I1124 13:59:24.518001  597884 main.go:143] libmachine: Using SSH client type: native
	I1124 13:59:24.518269  597884 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33453 <nil> <nil>}
	I1124 13:59:24.518294  597884 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-098307' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-098307/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-098307' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 13:59:24.658953  597884 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 13:59:24.658980  597884 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21932-348000/.minikube CaCertPath:/home/jenkins/minikube-integration/21932-348000/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21932-348000/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21932-348000/.minikube}
	I1124 13:59:24.659010  597884 ubuntu.go:190] setting up certificates
	I1124 13:59:24.659027  597884 provision.go:84] configureAuth start
	I1124 13:59:24.659077  597884 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-098307
	I1124 13:59:24.677011  597884 provision.go:143] copyHostCerts
	I1124 13:59:24.677061  597884 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-348000/.minikube/ca.pem, removing ...
	I1124 13:59:24.677068  597884 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-348000/.minikube/ca.pem
	I1124 13:59:24.677144  597884 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-348000/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21932-348000/.minikube/ca.pem (1078 bytes)
	I1124 13:59:24.677239  597884 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-348000/.minikube/cert.pem, removing ...
	I1124 13:59:24.677247  597884 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-348000/.minikube/cert.pem
	I1124 13:59:24.677273  597884 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-348000/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21932-348000/.minikube/cert.pem (1123 bytes)
	I1124 13:59:24.677337  597884 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-348000/.minikube/key.pem, removing ...
	I1124 13:59:24.677344  597884 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-348000/.minikube/key.pem
	I1124 13:59:24.677367  597884 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-348000/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21932-348000/.minikube/key.pem (1675 bytes)
	I1124 13:59:24.677434  597884 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21932-348000/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21932-348000/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21932-348000/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-098307 san=[127.0.0.1 192.168.103.2 default-k8s-diff-port-098307 localhost minikube]
	I1124 13:59:24.700861  597884 provision.go:177] copyRemoteCerts
	I1124 13:59:24.700915  597884 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 13:59:24.700946  597884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-098307
	I1124 13:59:24.716539  597884 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/default-k8s-diff-port-098307/id_rsa Username:docker}
	I1124 13:59:24.816479  597884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1124 13:59:24.835966  597884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1124 13:59:24.853382  597884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1124 13:59:24.869978  597884 provision.go:87] duration metric: took 210.935424ms to configureAuth
	I1124 13:59:24.870003  597884 ubuntu.go:206] setting minikube options for container-runtime
	I1124 13:59:24.870163  597884 config.go:182] Loaded profile config "default-k8s-diff-port-098307": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 13:59:24.870290  597884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-098307
	I1124 13:59:24.887462  597884 main.go:143] libmachine: Using SSH client type: native
	I1124 13:59:24.887816  597884 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33453 <nil> <nil>}
	I1124 13:59:24.887849  597884 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1124 13:59:25.187470  597884 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1124 13:59:25.187500  597884 machine.go:97] duration metric: took 4.020690156s to provisionDockerMachine
	I1124 13:59:25.187511  597884 client.go:176] duration metric: took 9.63563611s to LocalClient.Create
	I1124 13:59:25.187534  597884 start.go:167] duration metric: took 9.635698843s to libmachine.API.Create "default-k8s-diff-port-098307"
	I1124 13:59:25.187556  597884 start.go:293] postStartSetup for "default-k8s-diff-port-098307" (driver="docker")
	I1124 13:59:25.187571  597884 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 13:59:25.187645  597884 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 13:59:25.187700  597884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-098307
	I1124 13:59:25.205459  597884 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/default-k8s-diff-port-098307/id_rsa Username:docker}
	I1124 13:59:25.309061  597884 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 13:59:25.312570  597884 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 13:59:25.312600  597884 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 13:59:25.312612  597884 filesync.go:126] Scanning /home/jenkins/minikube-integration/21932-348000/.minikube/addons for local assets ...
	I1124 13:59:25.312658  597884 filesync.go:126] Scanning /home/jenkins/minikube-integration/21932-348000/.minikube/files for local assets ...
	I1124 13:59:25.312756  597884 filesync.go:149] local asset: /home/jenkins/minikube-integration/21932-348000/.minikube/files/etc/ssl/certs/3515932.pem -> 3515932.pem in /etc/ssl/certs
	I1124 13:59:25.312879  597884 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 13:59:25.320637  597884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/files/etc/ssl/certs/3515932.pem --> /etc/ssl/certs/3515932.pem (1708 bytes)
	I1124 13:59:25.340727  597884 start.go:296] duration metric: took 153.136219ms for postStartSetup
	I1124 13:59:25.341145  597884 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-098307
	I1124 13:59:25.359149  597884 profile.go:143] Saving config to /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/default-k8s-diff-port-098307/config.json ...
	I1124 13:59:25.359434  597884 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 13:59:25.359479  597884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-098307
	I1124 13:59:25.377616  597884 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/default-k8s-diff-port-098307/id_rsa Username:docker}
	I1124 13:59:25.481388  597884 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 13:59:25.486380  597884 start.go:128] duration metric: took 9.936944633s to createHost
	I1124 13:59:25.486472  597884 start.go:83] releasing machines lock for "default-k8s-diff-port-098307", held for 9.937143835s
	I1124 13:59:25.486559  597884 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-098307
	I1124 13:59:25.507528  597884 ssh_runner.go:195] Run: cat /version.json
	I1124 13:59:25.507589  597884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-098307
	I1124 13:59:25.507609  597884 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 13:59:25.507720  597884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-098307
	I1124 13:59:25.528551  597884 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/default-k8s-diff-port-098307/id_rsa Username:docker}
	I1124 13:59:25.529067  597884 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33453 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/default-k8s-diff-port-098307/id_rsa Username:docker}
	I1124 13:59:25.699870  597884 ssh_runner.go:195] Run: systemctl --version
	I1124 13:59:25.706480  597884 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1124 13:59:25.744609  597884 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 13:59:25.749259  597884 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 13:59:25.749310  597884 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 13:59:25.774230  597884 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1124 13:59:25.774256  597884 start.go:496] detecting cgroup driver to use...
	I1124 13:59:25.774284  597884 detect.go:190] detected "systemd" cgroup driver on host os
	I1124 13:59:25.774338  597884 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1124 13:59:25.790447  597884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1124 13:59:25.803376  597884 docker.go:218] disabling cri-docker service (if available) ...
	I1124 13:59:25.803414  597884 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 13:59:25.822629  597884 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 13:59:25.842634  597884 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 13:59:25.942221  597884 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 13:59:26.041690  597884 docker.go:234] disabling docker service ...
	I1124 13:59:26.041741  597884 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 13:59:26.060678  597884 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 13:59:26.074440  597884 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 13:59:26.165820  597884 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 13:59:26.249189  597884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 13:59:26.263029  597884 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 13:59:26.278630  597884 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1124 13:59:26.278692  597884 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 13:59:26.290012  597884 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1124 13:59:26.290072  597884 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 13:59:26.299160  597884 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 13:59:26.307594  597884 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 13:59:26.317451  597884 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 13:59:26.325555  597884 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 13:59:26.333921  597884 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 13:59:26.348020  597884 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 13:59:26.356606  597884 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 13:59:26.364037  597884 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 13:59:26.370920  597884 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 13:59:26.456198  597884 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1124 13:59:26.589120  597884 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1124 13:59:26.589188  597884 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1124 13:59:26.593832  597884 start.go:564] Will wait 60s for crictl version
	I1124 13:59:26.593917  597884 ssh_runner.go:195] Run: which crictl
	I1124 13:59:26.597498  597884 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 13:59:26.623217  597884 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1124 13:59:26.623309  597884 ssh_runner.go:195] Run: crio --version
	I1124 13:59:26.655217  597884 ssh_runner.go:195] Run: crio --version
	I1124 13:59:26.694085  597884 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	
	
	==> CRI-O <==
	Nov 24 13:58:50 old-k8s-version-551674 crio[568]: time="2025-11-24T13:58:50.08308365Z" level=info msg="Created container 4092d50c993eba017f68490cce7dfa72cbda4a6ca12b00a4ef41475527e0dbd6: kubernetes-dashboard/kubernetes-dashboard-8694d4445c-lfgtw/kubernetes-dashboard" id=db952c3d-7506-40d4-af35-7a7ac31d1a32 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 13:58:50 old-k8s-version-551674 crio[568]: time="2025-11-24T13:58:50.083706017Z" level=info msg="Starting container: 4092d50c993eba017f68490cce7dfa72cbda4a6ca12b00a4ef41475527e0dbd6" id=909cb119-63ba-48ab-a59e-655e5ed19c69 name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 13:58:50 old-k8s-version-551674 crio[568]: time="2025-11-24T13:58:50.085779944Z" level=info msg="Started container" PID=1749 containerID=4092d50c993eba017f68490cce7dfa72cbda4a6ca12b00a4ef41475527e0dbd6 description=kubernetes-dashboard/kubernetes-dashboard-8694d4445c-lfgtw/kubernetes-dashboard id=909cb119-63ba-48ab-a59e-655e5ed19c69 name=/runtime.v1.RuntimeService/StartContainer sandboxID=ac9f84d7e86e725eaee2146a048eaaf76bf03f5c14c72cbdb03065cdf4e154a6
	Nov 24 13:59:01 old-k8s-version-551674 crio[568]: time="2025-11-24T13:59:01.188433321Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=0b54cb3c-c75b-442d-95cd-d6f35a192a23 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 13:59:01 old-k8s-version-551674 crio[568]: time="2025-11-24T13:59:01.221492295Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=40cf8845-2709-4a00-a4f0-da1aef49a5dd name=/runtime.v1.ImageService/ImageStatus
	Nov 24 13:59:01 old-k8s-version-551674 crio[568]: time="2025-11-24T13:59:01.222579619Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=003223c1-9e0f-43a7-8ddb-1b713944408b name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 13:59:01 old-k8s-version-551674 crio[568]: time="2025-11-24T13:59:01.222729241Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 13:59:01 old-k8s-version-551674 crio[568]: time="2025-11-24T13:59:01.260799028Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 13:59:01 old-k8s-version-551674 crio[568]: time="2025-11-24T13:59:01.260985545Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/86de01fd18228979a7e84ef511951261ad38b7ed08269173465d7ee809a47125/merged/etc/passwd: no such file or directory"
	Nov 24 13:59:01 old-k8s-version-551674 crio[568]: time="2025-11-24T13:59:01.261010106Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/86de01fd18228979a7e84ef511951261ad38b7ed08269173465d7ee809a47125/merged/etc/group: no such file or directory"
	Nov 24 13:59:01 old-k8s-version-551674 crio[568]: time="2025-11-24T13:59:01.261218843Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 13:59:01 old-k8s-version-551674 crio[568]: time="2025-11-24T13:59:01.288600881Z" level=info msg="Created container d08df8429929bc9c5bafd803df7535916fa219a21f1a5928b33cccf8ac1b25c0: kube-system/storage-provisioner/storage-provisioner" id=003223c1-9e0f-43a7-8ddb-1b713944408b name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 13:59:01 old-k8s-version-551674 crio[568]: time="2025-11-24T13:59:01.289164661Z" level=info msg="Starting container: d08df8429929bc9c5bafd803df7535916fa219a21f1a5928b33cccf8ac1b25c0" id=702d543c-9b5c-4a81-8925-2944c4c2faa2 name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 13:59:01 old-k8s-version-551674 crio[568]: time="2025-11-24T13:59:01.291210685Z" level=info msg="Started container" PID=1773 containerID=d08df8429929bc9c5bafd803df7535916fa219a21f1a5928b33cccf8ac1b25c0 description=kube-system/storage-provisioner/storage-provisioner id=702d543c-9b5c-4a81-8925-2944c4c2faa2 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f0940035ae999d4cca36241f4f94d82950d7fc1192bfdf77972b5812cd5683fa
	Nov 24 13:59:07 old-k8s-version-551674 crio[568]: time="2025-11-24T13:59:07.080441884Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=b61a63b3-fe98-4e6c-af03-6db3350c6e56 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 13:59:07 old-k8s-version-551674 crio[568]: time="2025-11-24T13:59:07.081381917Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=48905c63-f000-4400-9c26-9a1a6710bf02 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 13:59:07 old-k8s-version-551674 crio[568]: time="2025-11-24T13:59:07.082417443Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-tbfcd/dashboard-metrics-scraper" id=968c2269-6774-4488-ac5d-2117f61efd8a name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 13:59:07 old-k8s-version-551674 crio[568]: time="2025-11-24T13:59:07.082578884Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 13:59:07 old-k8s-version-551674 crio[568]: time="2025-11-24T13:59:07.089342398Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 13:59:07 old-k8s-version-551674 crio[568]: time="2025-11-24T13:59:07.089790257Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 13:59:07 old-k8s-version-551674 crio[568]: time="2025-11-24T13:59:07.123266827Z" level=info msg="Created container 058b37251a9c872df1f4d274fae8dd67de3dd59bfe3548eb6ad77a3eefbd1c90: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-tbfcd/dashboard-metrics-scraper" id=968c2269-6774-4488-ac5d-2117f61efd8a name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 13:59:07 old-k8s-version-551674 crio[568]: time="2025-11-24T13:59:07.123795218Z" level=info msg="Starting container: 058b37251a9c872df1f4d274fae8dd67de3dd59bfe3548eb6ad77a3eefbd1c90" id=789f37af-e8ff-4f52-9312-2a7114ab50e1 name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 13:59:07 old-k8s-version-551674 crio[568]: time="2025-11-24T13:59:07.125627385Z" level=info msg="Started container" PID=1789 containerID=058b37251a9c872df1f4d274fae8dd67de3dd59bfe3548eb6ad77a3eefbd1c90 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-tbfcd/dashboard-metrics-scraper id=789f37af-e8ff-4f52-9312-2a7114ab50e1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=3e8596c7c2dfb5b8b38d52424ea817d07c51e1900c9cf39612bc5cf5fc063b34
	Nov 24 13:59:07 old-k8s-version-551674 crio[568]: time="2025-11-24T13:59:07.206324185Z" level=info msg="Removing container: f50e347c4b30fd524adb2392e73b4099a54c0089a58776aa38239fb4ed355ab2" id=fe9ce977-4106-4f4d-9e6e-94004bd3dcbf name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 24 13:59:07 old-k8s-version-551674 crio[568]: time="2025-11-24T13:59:07.216614873Z" level=info msg="Removed container f50e347c4b30fd524adb2392e73b4099a54c0089a58776aa38239fb4ed355ab2: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-tbfcd/dashboard-metrics-scraper" id=fe9ce977-4106-4f4d-9e6e-94004bd3dcbf name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	058b37251a9c8       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           20 seconds ago       Exited              dashboard-metrics-scraper   2                   3e8596c7c2dfb       dashboard-metrics-scraper-5f989dc9cf-tbfcd       kubernetes-dashboard
	d08df8429929b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           26 seconds ago       Running             storage-provisioner         1                   f0940035ae999       storage-provisioner                              kube-system
	4092d50c993eb       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   37 seconds ago       Running             kubernetes-dashboard        0                   ac9f84d7e86e7       kubernetes-dashboard-8694d4445c-lfgtw            kubernetes-dashboard
	5945066c1bf43       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           57 seconds ago       Running             coredns                     0                   75a8aadbc75ef       coredns-5dd5756b68-swk4w                         kube-system
	cc6aaefc6b92b       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           57 seconds ago       Running             busybox                     1                   b3584f76e52a7       busybox                                          default
	79cc18514458a       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           57 seconds ago       Running             kindnet-cni                 0                   95b32b83ef344       kindnet-sz57p                                    kube-system
	6a650aa5e54cd       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           57 seconds ago       Exited              storage-provisioner         0                   f0940035ae999       storage-provisioner                              kube-system
	20e66ad022041       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           57 seconds ago       Running             kube-proxy                  0                   275dca16bd233       kube-proxy-trn2x                                 kube-system
	9b964e539060b       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           About a minute ago   Running             etcd                        0                   2e4a9466da4b0       etcd-old-k8s-version-551674                      kube-system
	e8b2cdae759a7       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           About a minute ago   Running             kube-scheduler              0                   db905614b9c4b       kube-scheduler-old-k8s-version-551674            kube-system
	53b732b3e825d       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           About a minute ago   Running             kube-apiserver              0                   f1ef3bf979018       kube-apiserver-old-k8s-version-551674            kube-system
	5609559ca1585       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           About a minute ago   Running             kube-controller-manager     0                   3adf05dfaa797       kube-controller-manager-old-k8s-version-551674   kube-system
	
	
	==> coredns [5945066c1bf4374e52728a69c48a556dbb99eb23b787887fcdc19f79b27dbdf1] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 4c7f44b73086be760ec9e64204f63c5cc5a952c8c1c55ba0b41d8fc3315ce3c7d0259d04847cb8b4561043d4549603f3bccfd9b397eeb814eef159d244d26f39
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:53073 - 20079 "HINFO IN 3266694113286623893.392185349628822583. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.479082221s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> describe nodes <==
	Name:               old-k8s-version-551674
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-551674
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b5d1c9f4e75f4e638a533695fd62619949cefcab
	                    minikube.k8s.io/name=old-k8s-version-551674
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T13_57_26_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 13:57:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-551674
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 13:59:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 13:59:00 +0000   Mon, 24 Nov 2025 13:57:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 13:59:00 +0000   Mon, 24 Nov 2025 13:57:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 13:59:00 +0000   Mon, 24 Nov 2025 13:57:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 13:59:00 +0000   Mon, 24 Nov 2025 13:58:29 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    old-k8s-version-551674
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                3bfe263a-8777-48b0-84b7-18ab723a148d
	  Boot ID:                    9a34d64a-eb17-4892-9c0b-855837aec864
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 coredns-5dd5756b68-swk4w                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     110s
	  kube-system                 etcd-old-k8s-version-551674                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m4s
	  kube-system                 kindnet-sz57p                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      110s
	  kube-system                 kube-apiserver-old-k8s-version-551674             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m5s
	  kube-system                 kube-controller-manager-old-k8s-version-551674    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 kube-proxy-trn2x                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-scheduler-old-k8s-version-551674             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-tbfcd        0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-lfgtw             0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 109s               kube-proxy       
	  Normal  Starting                 57s                kube-proxy       
	  Normal  NodeHasSufficientMemory  2m3s               kubelet          Node old-k8s-version-551674 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m3s               kubelet          Node old-k8s-version-551674 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m3s               kubelet          Node old-k8s-version-551674 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m3s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           111s               node-controller  Node old-k8s-version-551674 event: Registered Node old-k8s-version-551674 in Controller
	  Normal  NodeReady                98s                kubelet          Node old-k8s-version-551674 status is now: NodeReady
	  Normal  Starting                 62s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  62s (x8 over 62s)  kubelet          Node old-k8s-version-551674 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    62s (x8 over 62s)  kubelet          Node old-k8s-version-551674 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     62s (x8 over 62s)  kubelet          Node old-k8s-version-551674 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           46s                node-controller  Node old-k8s-version-551674 event: Registered Node old-k8s-version-551674 in Controller
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a c8 62 0b 56 43 08 06
	[  +0.000389] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 7e 1c 4f d3 0f 6c 08 06
	[Nov24 13:16] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +1.054353] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +1.023912] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +1.023872] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +1.023897] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +1.023891] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +2.047768] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +4.031637] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +8.191144] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[ +16.382308] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[Nov24 13:17] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	
	
	==> etcd [9b964e539060bf1c1a1da0a82bb08dc64769689e2441ea480573fa9ab7f2a79c] <==
	{"level":"info","ts":"2025-11-24T13:58:26.638333Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-24T13:58:26.640916Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-24T13:58:26.641161Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"dfc97eb0aae75b33","initial-advertise-peer-urls":["https://192.168.94.2:2380"],"listen-peer-urls":["https://192.168.94.2:2380"],"advertise-client-urls":["https://192.168.94.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.94.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-24T13:58:26.641224Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-24T13:58:26.641221Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.94.2:2380"}
	{"level":"info","ts":"2025-11-24T13:58:26.641298Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.94.2:2380"}
	{"level":"info","ts":"2025-11-24T13:58:28.530339Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 is starting a new election at term 2"}
	{"level":"info","ts":"2025-11-24T13:58:28.530412Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-11-24T13:58:28.530432Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 received MsgPreVoteResp from dfc97eb0aae75b33 at term 2"}
	{"level":"info","ts":"2025-11-24T13:58:28.530448Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became candidate at term 3"}
	{"level":"info","ts":"2025-11-24T13:58:28.530457Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 received MsgVoteResp from dfc97eb0aae75b33 at term 3"}
	{"level":"info","ts":"2025-11-24T13:58:28.530469Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"dfc97eb0aae75b33 became leader at term 3"}
	{"level":"info","ts":"2025-11-24T13:58:28.530483Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: dfc97eb0aae75b33 elected leader dfc97eb0aae75b33 at term 3"}
	{"level":"info","ts":"2025-11-24T13:58:28.53182Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"dfc97eb0aae75b33","local-member-attributes":"{Name:old-k8s-version-551674 ClientURLs:[https://192.168.94.2:2379]}","request-path":"/0/members/dfc97eb0aae75b33/attributes","cluster-id":"da400bbece288f5a","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-24T13:58:28.53192Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-24T13:58:28.531947Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-24T13:58:28.532577Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-24T13:58:28.53261Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-24T13:58:28.533374Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-11-24T13:58:28.533376Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.94.2:2379"}
	{"level":"warn","ts":"2025-11-24T13:59:01.59847Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"205.317639ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-24T13:59:01.598717Z","caller":"traceutil/trace.go:171","msg":"trace[516285939] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:600; }","duration":"205.602004ms","start":"2025-11-24T13:59:01.393096Z","end":"2025-11-24T13:59:01.598698Z","steps":["trace[516285939] 'range keys from in-memory index tree'  (duration: 205.289709ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-24T13:59:01.599017Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"123.308434ms","expected-duration":"100ms","prefix":"","request":"header:<ID:6571766388419872808 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/kube-system/storage-provisioner.187af6062c4768a8\" mod_revision:498 > success:<request_put:<key:\"/registry/events/kube-system/storage-provisioner.187af6062c4768a8\" value_size:676 lease:6571766388419872123 >> failure:<request_range:<key:\"/registry/events/kube-system/storage-provisioner.187af6062c4768a8\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-11-24T13:59:01.599132Z","caller":"traceutil/trace.go:171","msg":"trace[1067413998] transaction","detail":"{read_only:false; response_revision:601; number_of_response:1; }","duration":"249.233324ms","start":"2025-11-24T13:59:01.349872Z","end":"2025-11-24T13:59:01.599105Z","steps":["trace[1067413998] 'process raft request'  (duration: 125.321101ms)","trace[1067413998] 'compare'  (duration: 123.18927ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-24T13:59:02.363554Z","caller":"traceutil/trace.go:171","msg":"trace[287514987] transaction","detail":"{read_only:false; response_revision:602; number_of_response:1; }","duration":"165.965787ms","start":"2025-11-24T13:59:02.197568Z","end":"2025-11-24T13:59:02.363533Z","steps":["trace[287514987] 'process raft request'  (duration: 165.835808ms)"],"step_count":1}
	
	
	==> kernel <==
	 13:59:28 up  2:41,  0 user,  load average: 2.78, 2.95, 2.01
	Linux old-k8s-version-551674 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [79cc18514458ab77dd20c134c4befb59891d55b0c82fe66dfc6a6a3676870f3c] <==
	I1124 13:58:30.719060       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 13:58:30.719297       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1124 13:58:30.719458       1 main.go:148] setting mtu 1500 for CNI 
	I1124 13:58:30.719473       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 13:58:30.719491       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T13:58:30Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 13:58:30.920654       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 13:58:30.920744       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 13:58:30.920759       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 13:58:31.018054       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1124 13:58:31.220851       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 13:58:31.220871       1 metrics.go:72] Registering metrics
	I1124 13:58:31.220939       1 controller.go:711] "Syncing nftables rules"
	I1124 13:58:40.920415       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1124 13:58:40.920467       1 main.go:301] handling current node
	I1124 13:58:50.920656       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1124 13:58:50.920683       1 main.go:301] handling current node
	I1124 13:59:00.920694       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1124 13:59:00.920722       1 main.go:301] handling current node
	I1124 13:59:10.921120       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1124 13:59:10.921160       1 main.go:301] handling current node
	I1124 13:59:20.926001       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1124 13:59:20.926034       1 main.go:301] handling current node
	
	
	==> kube-apiserver [53b732b3e825d4856ae0fadf78757166b2bc4c473356786dbb86c08c262503b3] <==
	I1124 13:58:29.563772       1 aggregator.go:166] initial CRD sync complete...
	I1124 13:58:29.563780       1 autoregister_controller.go:141] Starting autoregister controller
	I1124 13:58:29.563787       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1124 13:58:29.563797       1 cache.go:39] Caches are synced for autoregister controller
	E1124 13:58:29.564067       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["workload-low","catch-all","exempt","global-default","leader-election","node-high","system","workload-high"] items=[{},{},{},{},{},{},{},{}]
	I1124 13:58:29.564691       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1124 13:58:29.565610       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1124 13:58:29.565614       1 shared_informer.go:318] Caches are synced for configmaps
	I1124 13:58:30.341644       1 controller.go:624] quota admission added evaluator for: namespaces
	I1124 13:58:30.370477       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1124 13:58:30.393274       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 13:58:30.401949       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 13:58:30.409434       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1124 13:58:30.458859       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.106.115.201"}
	I1124 13:58:30.468416       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1124 13:58:30.475909       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.98.159.12"}
	E1124 13:58:39.564065       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["global-default","leader-election","node-high","system","workload-high","workload-low","catch-all","exempt"] items=[{},{},{},{},{},{},{},{}]
	I1124 13:58:42.633925       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1124 13:58:42.733476       1 controller.go:624] quota admission added evaluator for: endpoints
	I1124 13:58:42.783955       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1124 13:58:42.783956       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E1124 13:58:49.565019       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["workload-high","workload-low","catch-all","exempt","global-default","leader-election","node-high","system"] items=[{},{},{},{},{},{},{},{}]
	E1124 13:58:59.566165       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["system","workload-high","workload-low","catch-all","exempt","global-default","leader-election","node-high"] items=[{},{},{},{},{},{},{},{}]
	E1124 13:59:09.567250       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["system","workload-high","workload-low","catch-all","exempt","global-default","leader-election","node-high"] items=[{},{},{},{},{},{},{},{}]
	E1124 13:59:19.567909       1 apf_controller.go:419] "Unable to derive new concurrency limits" err="impossible: ran out of bounds to consider in bound-constrained problem" plNames=["global-default","leader-election","node-high","system","workload-high","workload-low","catch-all","exempt"] items=[{},{},{},{},{},{},{},{}]
	
	
	==> kube-controller-manager [5609559ca15853175b8f8a04131a0ec91f834eb1788f2cb29ae4934ff72c93a0] <==
	I1124 13:58:42.637970       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dashboard-metrics-scraper-5f989dc9cf to 1"
	I1124 13:58:42.845677       1 shared_informer.go:318] Caches are synced for garbage collector
	I1124 13:58:42.881079       1 shared_informer.go:318] Caches are synced for garbage collector
	I1124 13:58:42.881107       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1124 13:58:42.890946       1 event.go:307] "Event occurred" object="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dashboard-metrics-scraper-5f989dc9cf-tbfcd"
	I1124 13:58:42.891078       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-lfgtw"
	I1124 13:58:42.893071       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="562.969613ms"
	I1124 13:58:42.893588       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="159.869µs"
	I1124 13:58:42.896712       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="258.927582ms"
	I1124 13:58:42.896806       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="260.100196ms"
	I1124 13:58:42.902743       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="5.90437ms"
	I1124 13:58:42.902835       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="50.894µs"
	I1124 13:58:42.904097       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="7.337519ms"
	I1124 13:58:42.904198       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="65.2µs"
	I1124 13:58:42.909714       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="34.904µs"
	I1124 13:58:42.916611       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="35.578µs"
	I1124 13:58:46.155542       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="103.204µs"
	I1124 13:58:47.166223       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="61.154µs"
	I1124 13:58:48.171698       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="117.735µs"
	I1124 13:58:50.186121       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="10.675729ms"
	I1124 13:58:50.186224       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="39.33µs"
	I1124 13:59:07.219181       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="65.927µs"
	I1124 13:59:09.330057       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="7.185679ms"
	I1124 13:59:09.330171       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="71.305µs"
	I1124 13:59:13.211120       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="71.599µs"
	
	
	==> kube-proxy [20e66ad022041ccc62db2d900a48a8abc2e3d419daf8b5de2ef5544962096bfd] <==
	I1124 13:58:30.484431       1 server_others.go:69] "Using iptables proxy"
	I1124 13:58:30.493986       1 node.go:141] Successfully retrieved node IP: 192.168.94.2
	I1124 13:58:30.515650       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 13:58:30.518212       1 server_others.go:152] "Using iptables Proxier"
	I1124 13:58:30.518237       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1124 13:58:30.518242       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1124 13:58:30.518267       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1124 13:58:30.518523       1 server.go:846] "Version info" version="v1.28.0"
	I1124 13:58:30.518542       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 13:58:30.519867       1 config.go:97] "Starting endpoint slice config controller"
	I1124 13:58:30.520187       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1124 13:58:30.519939       1 config.go:315] "Starting node config controller"
	I1124 13:58:30.520286       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1124 13:58:30.520017       1 config.go:188] "Starting service config controller"
	I1124 13:58:30.520440       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1124 13:58:30.621165       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1124 13:58:30.621228       1 shared_informer.go:318] Caches are synced for node config
	I1124 13:58:30.621372       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [e8b2cdae759a78a53d4bb761e54084097e234b3a4625fcace0612d86af8ce8e7] <==
	I1124 13:58:26.987175       1 serving.go:348] Generated self-signed cert in-memory
	W1124 13:58:29.497677       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1124 13:58:29.497713       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1124 13:58:29.497727       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1124 13:58:29.497736       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1124 13:58:29.537902       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1124 13:58:29.537943       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 13:58:29.539668       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 13:58:29.539768       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1124 13:58:29.540663       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1124 13:58:29.540707       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1124 13:58:29.640239       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 24 13:58:43 old-k8s-version-551674 kubelet[736]: I1124 13:58:43.013212     736 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/a793b031-3ed9-4323-be38-0ae496db715b-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-lfgtw\" (UID: \"a793b031-3ed9-4323-be38-0ae496db715b\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-lfgtw"
	Nov 24 13:58:43 old-k8s-version-551674 kubelet[736]: I1124 13:58:43.013266     736 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5b5jx\" (UniqueName: \"kubernetes.io/projected/a793b031-3ed9-4323-be38-0ae496db715b-kube-api-access-5b5jx\") pod \"kubernetes-dashboard-8694d4445c-lfgtw\" (UID: \"a793b031-3ed9-4323-be38-0ae496db715b\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-lfgtw"
	Nov 24 13:58:43 old-k8s-version-551674 kubelet[736]: I1124 13:58:43.013302     736 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5sxlb\" (UniqueName: \"kubernetes.io/projected/1908d968-359a-41f3-bd6f-87b896ff4185-kube-api-access-5sxlb\") pod \"dashboard-metrics-scraper-5f989dc9cf-tbfcd\" (UID: \"1908d968-359a-41f3-bd6f-87b896ff4185\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-tbfcd"
	Nov 24 13:58:43 old-k8s-version-551674 kubelet[736]: I1124 13:58:43.013404     736 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/1908d968-359a-41f3-bd6f-87b896ff4185-tmp-volume\") pod \"dashboard-metrics-scraper-5f989dc9cf-tbfcd\" (UID: \"1908d968-359a-41f3-bd6f-87b896ff4185\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-tbfcd"
	Nov 24 13:58:46 old-k8s-version-551674 kubelet[736]: I1124 13:58:46.144042     736 scope.go:117] "RemoveContainer" containerID="b09eae88526ea09bb3fa4a323285da52aa56043994840eab8ef0542fafd17b5d"
	Nov 24 13:58:47 old-k8s-version-551674 kubelet[736]: I1124 13:58:47.150271     736 scope.go:117] "RemoveContainer" containerID="b09eae88526ea09bb3fa4a323285da52aa56043994840eab8ef0542fafd17b5d"
	Nov 24 13:58:47 old-k8s-version-551674 kubelet[736]: I1124 13:58:47.150514     736 scope.go:117] "RemoveContainer" containerID="f50e347c4b30fd524adb2392e73b4099a54c0089a58776aa38239fb4ed355ab2"
	Nov 24 13:58:47 old-k8s-version-551674 kubelet[736]: E1124 13:58:47.150940     736 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-tbfcd_kubernetes-dashboard(1908d968-359a-41f3-bd6f-87b896ff4185)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-tbfcd" podUID="1908d968-359a-41f3-bd6f-87b896ff4185"
	Nov 24 13:58:48 old-k8s-version-551674 kubelet[736]: I1124 13:58:48.155343     736 scope.go:117] "RemoveContainer" containerID="f50e347c4b30fd524adb2392e73b4099a54c0089a58776aa38239fb4ed355ab2"
	Nov 24 13:58:48 old-k8s-version-551674 kubelet[736]: E1124 13:58:48.155762     736 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-tbfcd_kubernetes-dashboard(1908d968-359a-41f3-bd6f-87b896ff4185)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-tbfcd" podUID="1908d968-359a-41f3-bd6f-87b896ff4185"
	Nov 24 13:58:53 old-k8s-version-551674 kubelet[736]: I1124 13:58:53.200806     736 scope.go:117] "RemoveContainer" containerID="f50e347c4b30fd524adb2392e73b4099a54c0089a58776aa38239fb4ed355ab2"
	Nov 24 13:58:53 old-k8s-version-551674 kubelet[736]: E1124 13:58:53.201187     736 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-tbfcd_kubernetes-dashboard(1908d968-359a-41f3-bd6f-87b896ff4185)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-tbfcd" podUID="1908d968-359a-41f3-bd6f-87b896ff4185"
	Nov 24 13:59:01 old-k8s-version-551674 kubelet[736]: I1124 13:59:01.187935     736 scope.go:117] "RemoveContainer" containerID="6a650aa5e54cdab0b1e4c8209675695989447069f4a73748f4310f040430f50c"
	Nov 24 13:59:01 old-k8s-version-551674 kubelet[736]: I1124 13:59:01.260058     736 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-lfgtw" podStartSLOduration=12.437420431 podCreationTimestamp="2025-11-24 13:58:42 +0000 UTC" firstStartedPulling="2025-11-24 13:58:43.220562073 +0000 UTC m=+17.229212508" lastFinishedPulling="2025-11-24 13:58:50.043130898 +0000 UTC m=+24.051781340" observedRunningTime="2025-11-24 13:58:50.176206278 +0000 UTC m=+24.184856722" watchObservedRunningTime="2025-11-24 13:59:01.259989263 +0000 UTC m=+35.268639707"
	Nov 24 13:59:07 old-k8s-version-551674 kubelet[736]: I1124 13:59:07.079741     736 scope.go:117] "RemoveContainer" containerID="f50e347c4b30fd524adb2392e73b4099a54c0089a58776aa38239fb4ed355ab2"
	Nov 24 13:59:07 old-k8s-version-551674 kubelet[736]: I1124 13:59:07.205114     736 scope.go:117] "RemoveContainer" containerID="f50e347c4b30fd524adb2392e73b4099a54c0089a58776aa38239fb4ed355ab2"
	Nov 24 13:59:07 old-k8s-version-551674 kubelet[736]: I1124 13:59:07.205336     736 scope.go:117] "RemoveContainer" containerID="058b37251a9c872df1f4d274fae8dd67de3dd59bfe3548eb6ad77a3eefbd1c90"
	Nov 24 13:59:07 old-k8s-version-551674 kubelet[736]: E1124 13:59:07.205716     736 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-tbfcd_kubernetes-dashboard(1908d968-359a-41f3-bd6f-87b896ff4185)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-tbfcd" podUID="1908d968-359a-41f3-bd6f-87b896ff4185"
	Nov 24 13:59:13 old-k8s-version-551674 kubelet[736]: I1124 13:59:13.200202     736 scope.go:117] "RemoveContainer" containerID="058b37251a9c872df1f4d274fae8dd67de3dd59bfe3548eb6ad77a3eefbd1c90"
	Nov 24 13:59:13 old-k8s-version-551674 kubelet[736]: E1124 13:59:13.200602     736 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-tbfcd_kubernetes-dashboard(1908d968-359a-41f3-bd6f-87b896ff4185)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-tbfcd" podUID="1908d968-359a-41f3-bd6f-87b896ff4185"
	Nov 24 13:59:23 old-k8s-version-551674 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 24 13:59:23 old-k8s-version-551674 kubelet[736]: I1124 13:59:23.087369     736 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Nov 24 13:59:23 old-k8s-version-551674 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 24 13:59:23 old-k8s-version-551674 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 24 13:59:23 old-k8s-version-551674 systemd[1]: kubelet.service: Consumed 1.523s CPU time.
	
	
	==> kubernetes-dashboard [4092d50c993eba017f68490cce7dfa72cbda4a6ca12b00a4ef41475527e0dbd6] <==
	2025/11/24 13:58:50 Using namespace: kubernetes-dashboard
	2025/11/24 13:58:50 Using in-cluster config to connect to apiserver
	2025/11/24 13:58:50 Using secret token for csrf signing
	2025/11/24 13:58:50 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/24 13:58:50 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/24 13:58:50 Successful initial request to the apiserver, version: v1.28.0
	2025/11/24 13:58:50 Generating JWE encryption key
	2025/11/24 13:58:50 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/24 13:58:50 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/24 13:58:50 Initializing JWE encryption key from synchronized object
	2025/11/24 13:58:50 Creating in-cluster Sidecar client
	2025/11/24 13:58:50 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/24 13:58:50 Serving insecurely on HTTP port: 9090
	2025/11/24 13:59:20 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/24 13:58:50 Starting overwatch
	
	
	==> storage-provisioner [6a650aa5e54cdab0b1e4c8209675695989447069f4a73748f4310f040430f50c] <==
	I1124 13:58:30.452771       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1124 13:59:00.456668       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [d08df8429929bc9c5bafd803df7535916fa219a21f1a5928b33cccf8ac1b25c0] <==
	I1124 13:59:01.302286       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1124 13:59:01.309853       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1124 13:59:01.309883       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1124 13:59:18.788752       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1124 13:59:18.788836       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"7422d999-6149-47d4-9886-755e6760dd69", APIVersion:"v1", ResourceVersion:"618", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-551674_99b641db-ec67-4cf6-8922-ef877a65a63b became leader
	I1124 13:59:18.788926       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-551674_99b641db-ec67-4cf6-8922-ef877a65a63b!
	I1124 13:59:18.889235       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-551674_99b641db-ec67-4cf6-8922-ef877a65a63b!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-551674 -n old-k8s-version-551674
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-551674 -n old-k8s-version-551674: exit status 2 (349.375214ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-551674 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (6.27s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.09s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-305966 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-305966 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (259.719833ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T14:00:00Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-305966 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-305966
helpers_test.go:243: (dbg) docker inspect newest-cni-305966:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d5c8bb04c9a8b101735edadb698799e5d6c945521b86d4301b6ea670806af7c0",
	        "Created": "2025-11-24T13:59:37.467773592Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 604459,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T13:59:37.503728022Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/d5c8bb04c9a8b101735edadb698799e5d6c945521b86d4301b6ea670806af7c0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d5c8bb04c9a8b101735edadb698799e5d6c945521b86d4301b6ea670806af7c0/hostname",
	        "HostsPath": "/var/lib/docker/containers/d5c8bb04c9a8b101735edadb698799e5d6c945521b86d4301b6ea670806af7c0/hosts",
	        "LogPath": "/var/lib/docker/containers/d5c8bb04c9a8b101735edadb698799e5d6c945521b86d4301b6ea670806af7c0/d5c8bb04c9a8b101735edadb698799e5d6c945521b86d4301b6ea670806af7c0-json.log",
	        "Name": "/newest-cni-305966",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-305966:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-305966",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d5c8bb04c9a8b101735edadb698799e5d6c945521b86d4301b6ea670806af7c0",
	                "LowerDir": "/var/lib/docker/overlay2/1808584ab797bfabf59b5eb852f6a41c74927bfca99095e0562af0f66a3fd777-init/diff:/var/lib/docker/overlay2/b17d6205cf290186b389ac7c1255d7274fea54ef27df9ff8755bddd2d25eb638/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1808584ab797bfabf59b5eb852f6a41c74927bfca99095e0562af0f66a3fd777/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1808584ab797bfabf59b5eb852f6a41c74927bfca99095e0562af0f66a3fd777/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1808584ab797bfabf59b5eb852f6a41c74927bfca99095e0562af0f66a3fd777/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-305966",
	                "Source": "/var/lib/docker/volumes/newest-cni-305966/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-305966",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-305966",
	                "name.minikube.sigs.k8s.io": "newest-cni-305966",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "ab587fdd7d616234b260e8c368de66cac41e9aae8d74013a81ca8a420977e0c7",
	            "SandboxKey": "/var/run/docker/netns/ab587fdd7d61",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33458"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33459"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33462"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33460"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33461"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-305966": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b817ca8b27f62f3a3563cdb6a0b78b72617f6f646af87e5319081625ae16c4aa",
	                    "EndpointID": "5c56ecacf4bbaaa66dae452c646177e9cc5c75b8a350ae741d78f93c3a2720cd",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "32:4a:da:cf:d9:a6",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-305966",
	                        "d5c8bb04c9a8"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
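helpers_test.go note (editorial): the "Ports" section of the inspect output above is what the later docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calls in the Last Start log read to locate the published SSH port (33458 here). The Go sketch below is a hypothetical, minimal reimplementation of that lookup, assuming the same Go template and the container name shown above; hostPort is an illustrative name, not minikube's actual helper.

	// Hypothetical sketch (not minikube's own code): resolve the host port Docker
	// published for a container port, using the same Go template that appears in
	// the cli_runner lines of the Last Start log below.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func hostPort(container, containerPort string) (string, error) {
		// Builds e.g. {{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}
		tmpl := fmt.Sprintf("{{(index (index .NetworkSettings.Ports %q) 0).HostPort}}", containerPort)
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		port, err := hostPort("newest-cni-305966", "22/tcp")
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		// Against the inspect output above this prints 33458, matching the
		// sshutil.go connections logged later in this report.
		fmt.Println("ssh host port:", port)
	}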
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-305966 -n newest-cni-305966
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-305966 logs -n 25
helpers_test.go:260: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable metrics-server -p old-k8s-version-551674 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-551674       │ jenkins │ v1.37.0 │ 24 Nov 25 13:58 UTC │                     │
	│ stop    │ -p old-k8s-version-551674 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-551674       │ jenkins │ v1.37.0 │ 24 Nov 25 13:58 UTC │ 24 Nov 25 13:58 UTC │
	│ addons  │ enable metrics-server -p no-preload-495729 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-495729            │ jenkins │ v1.37.0 │ 24 Nov 25 13:58 UTC │                     │
	│ stop    │ -p no-preload-495729 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-495729            │ jenkins │ v1.37.0 │ 24 Nov 25 13:58 UTC │ 24 Nov 25 13:58 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-551674 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-551674       │ jenkins │ v1.37.0 │ 24 Nov 25 13:58 UTC │ 24 Nov 25 13:58 UTC │
	│ start   │ -p old-k8s-version-551674 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-551674       │ jenkins │ v1.37.0 │ 24 Nov 25 13:58 UTC │ 24 Nov 25 13:59 UTC │
	│ addons  │ enable dashboard -p no-preload-495729 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-495729            │ jenkins │ v1.37.0 │ 24 Nov 25 13:58 UTC │ 24 Nov 25 13:58 UTC │
	│ start   │ -p no-preload-495729 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-495729            │ jenkins │ v1.37.0 │ 24 Nov 25 13:58 UTC │ 24 Nov 25 13:58 UTC │
	│ start   │ -p cert-expiration-107341 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-107341       │ jenkins │ v1.37.0 │ 24 Nov 25 13:58 UTC │ 24 Nov 25 13:58 UTC │
	│ delete  │ -p cert-expiration-107341                                                                                                                                                                                                                     │ cert-expiration-107341       │ jenkins │ v1.37.0 │ 24 Nov 25 13:58 UTC │ 24 Nov 25 13:58 UTC │
	│ start   │ -p embed-certs-456660 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-456660           │ jenkins │ v1.37.0 │ 24 Nov 25 13:58 UTC │                     │
	│ image   │ no-preload-495729 image list --format=json                                                                                                                                                                                                    │ no-preload-495729            │ jenkins │ v1.37.0 │ 24 Nov 25 13:59 UTC │ 24 Nov 25 13:59 UTC │
	│ pause   │ -p no-preload-495729 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-495729            │ jenkins │ v1.37.0 │ 24 Nov 25 13:59 UTC │                     │
	│ delete  │ -p no-preload-495729                                                                                                                                                                                                                          │ no-preload-495729            │ jenkins │ v1.37.0 │ 24 Nov 25 13:59 UTC │ 24 Nov 25 13:59 UTC │
	│ delete  │ -p no-preload-495729                                                                                                                                                                                                                          │ no-preload-495729            │ jenkins │ v1.37.0 │ 24 Nov 25 13:59 UTC │ 24 Nov 25 13:59 UTC │
	│ delete  │ -p disable-driver-mounts-036543                                                                                                                                                                                                               │ disable-driver-mounts-036543 │ jenkins │ v1.37.0 │ 24 Nov 25 13:59 UTC │ 24 Nov 25 13:59 UTC │
	│ start   │ -p default-k8s-diff-port-098307 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-098307 │ jenkins │ v1.37.0 │ 24 Nov 25 13:59 UTC │ 24 Nov 25 13:59 UTC │
	│ image   │ old-k8s-version-551674 image list --format=json                                                                                                                                                                                               │ old-k8s-version-551674       │ jenkins │ v1.37.0 │ 24 Nov 25 13:59 UTC │ 24 Nov 25 13:59 UTC │
	│ pause   │ -p old-k8s-version-551674 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-551674       │ jenkins │ v1.37.0 │ 24 Nov 25 13:59 UTC │                     │
	│ delete  │ -p old-k8s-version-551674                                                                                                                                                                                                                     │ old-k8s-version-551674       │ jenkins │ v1.37.0 │ 24 Nov 25 13:59 UTC │ 24 Nov 25 13:59 UTC │
	│ delete  │ -p old-k8s-version-551674                                                                                                                                                                                                                     │ old-k8s-version-551674       │ jenkins │ v1.37.0 │ 24 Nov 25 13:59 UTC │ 24 Nov 25 13:59 UTC │
	│ start   │ -p newest-cni-305966 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-305966            │ jenkins │ v1.37.0 │ 24 Nov 25 13:59 UTC │ 24 Nov 25 14:00 UTC │
	│ start   │ -p kubernetes-upgrade-061040 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-061040    │ jenkins │ v1.37.0 │ 24 Nov 25 13:59 UTC │                     │
	│ start   │ -p kubernetes-upgrade-061040 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-061040    │ jenkins │ v1.37.0 │ 24 Nov 25 13:59 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-305966 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-305966            │ jenkins │ v1.37.0 │ 24 Nov 25 14:00 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 13:59:58
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 13:59:58.933470  608167 out.go:360] Setting OutFile to fd 1 ...
	I1124 13:59:58.933741  608167 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:59:58.933750  608167 out.go:374] Setting ErrFile to fd 2...
	I1124 13:59:58.933755  608167 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:59:58.933945  608167 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-348000/.minikube/bin
	I1124 13:59:58.934415  608167 out.go:368] Setting JSON to false
	I1124 13:59:58.935672  608167 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":9746,"bootTime":1763983053,"procs":348,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 13:59:58.935733  608167 start.go:143] virtualization: kvm guest
	I1124 13:59:58.937235  608167 out.go:179] * [kubernetes-upgrade-061040] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1124 13:59:58.938445  608167 out.go:179]   - MINIKUBE_LOCATION=21932
	I1124 13:59:58.938454  608167 notify.go:221] Checking for updates...
	I1124 13:59:58.940489  608167 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 13:59:58.941575  608167 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21932-348000/kubeconfig
	I1124 13:59:58.942534  608167 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21932-348000/.minikube
	I1124 13:59:58.943428  608167 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1124 13:59:58.944510  608167 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 13:59:58.946049  608167 config.go:182] Loaded profile config "kubernetes-upgrade-061040": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 13:59:58.946827  608167 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 13:59:58.970628  608167 docker.go:124] docker version: linux-29.0.3:Docker Engine - Community
	I1124 13:59:58.970741  608167 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 13:59:59.037317  608167 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:77 OomKillDisable:false NGoroutines:86 SystemTime:2025-11-24 13:59:59.023549166 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 13:59:59.037464  608167 docker.go:319] overlay module found
	I1124 13:59:59.040697  608167 out.go:179] * Using the docker driver based on existing profile
	I1124 13:59:59.041719  608167 start.go:309] selected driver: docker
	I1124 13:59:59.041739  608167 start.go:927] validating driver "docker" against &{Name:kubernetes-upgrade-061040 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-061040 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirm
warePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 13:59:59.041847  608167 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 13:59:59.042571  608167 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 13:59:59.113343  608167 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:77 OomKillDisable:false NGoroutines:86 SystemTime:2025-11-24 13:59:59.101278973 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 13:59:59.113712  608167 cni.go:84] Creating CNI manager for ""
	I1124 13:59:59.113796  608167 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 13:59:59.113858  608167 start.go:353] cluster config:
	{Name:kubernetes-upgrade-061040 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:kubernetes-upgrade-061040 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluste
r.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgen
tPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 13:59:59.115700  608167 out.go:179] * Starting "kubernetes-upgrade-061040" primary control-plane node in "kubernetes-upgrade-061040" cluster
	I1124 13:59:59.117303  608167 cache.go:134] Beginning downloading kic base image for docker with crio
	I1124 13:59:59.118865  608167 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1124 13:59:59.120025  608167 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 13:59:59.120077  608167 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21932-348000/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1124 13:59:59.120093  608167 cache.go:65] Caching tarball of preloaded images
	I1124 13:59:59.120108  608167 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1124 13:59:59.120188  608167 preload.go:238] Found /home/jenkins/minikube-integration/21932-348000/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1124 13:59:59.120202  608167 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1124 13:59:59.120324  608167 profile.go:143] Saving config to /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/kubernetes-upgrade-061040/config.json ...
	I1124 13:59:59.147010  608167 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1124 13:59:59.147036  608167 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1124 13:59:59.147060  608167 cache.go:240] Successfully downloaded all kic artifacts
	I1124 13:59:59.147113  608167 start.go:360] acquireMachinesLock for kubernetes-upgrade-061040: {Name:mk822066604a822b31ca88692d52b6a7dc54f6f5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 13:59:59.147190  608167 start.go:364] duration metric: took 46.631µs to acquireMachinesLock for "kubernetes-upgrade-061040"
	I1124 13:59:59.147215  608167 start.go:96] Skipping create...Using existing machine configuration
	I1124 13:59:59.147222  608167 fix.go:54] fixHost starting: 
	I1124 13:59:59.147528  608167 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-061040 --format={{.State.Status}}
	I1124 13:59:59.166912  608167 fix.go:112] recreateIfNeeded on kubernetes-upgrade-061040: state=Running err=<nil>
	W1124 13:59:59.166940  608167 fix.go:138] unexpected machine state, will restart: <nil>
	I1124 13:59:57.008825  603544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:59:57.509082  603544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:59:58.009005  603544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:59:58.509125  603544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:59:59.008942  603544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:59:59.509026  603544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1124 13:59:59.581786  603544 kubeadm.go:1114] duration metric: took 4.68594324s to wait for elevateKubeSystemPrivileges
	I1124 13:59:59.581818  603544 kubeadm.go:403] duration metric: took 14.717701334s to StartCluster
	I1124 13:59:59.581837  603544 settings.go:142] acquiring lock: {Name:mk72c17792ecaf5f4aecae499df19a0043a48eea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:59:59.581914  603544 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21932-348000/kubeconfig
	I1124 13:59:59.583814  603544 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-348000/kubeconfig: {Name:mk6bbc2300c711b206dd5e2ef6fd04da250c6338 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 13:59:59.584089  603544 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 13:59:59.584105  603544 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1124 13:59:59.584118  603544 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 13:59:59.584219  603544 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-305966"
	I1124 13:59:59.584240  603544 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-305966"
	I1124 13:59:59.584248  603544 addons.go:70] Setting default-storageclass=true in profile "newest-cni-305966"
	I1124 13:59:59.584278  603544 host.go:66] Checking if "newest-cni-305966" exists ...
	I1124 13:59:59.584287  603544 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-305966"
	I1124 13:59:59.584315  603544 config.go:182] Loaded profile config "newest-cni-305966": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 13:59:59.584666  603544 cli_runner.go:164] Run: docker container inspect newest-cni-305966 --format={{.State.Status}}
	I1124 13:59:59.584844  603544 cli_runner.go:164] Run: docker container inspect newest-cni-305966 --format={{.State.Status}}
	I1124 13:59:59.585605  603544 out.go:179] * Verifying Kubernetes components...
	I1124 13:59:59.586811  603544 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 13:59:59.612399  603544 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 13:59:59.612581  603544 addons.go:239] Setting addon default-storageclass=true in "newest-cni-305966"
	I1124 13:59:59.612631  603544 host.go:66] Checking if "newest-cni-305966" exists ...
	I1124 13:59:59.613088  603544 cli_runner.go:164] Run: docker container inspect newest-cni-305966 --format={{.State.Status}}
	I1124 13:59:59.613660  603544 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 13:59:59.613679  603544 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 13:59:59.613734  603544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-305966
	I1124 13:59:59.646601  603544 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 13:59:59.646625  603544 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 13:59:59.646699  603544 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-305966
	I1124 13:59:59.647294  603544 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/newest-cni-305966/id_rsa Username:docker}
	I1124 13:59:59.672646  603544 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33458 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/newest-cni-305966/id_rsa Username:docker}
	I1124 13:59:59.688560  603544 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1124 13:59:59.740609  603544 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 13:59:59.770258  603544 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 13:59:59.802328  603544 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 13:59:59.909882  603544 start.go:977] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I1124 13:59:59.911584  603544 api_server.go:52] waiting for apiserver process to appear ...
	I1124 13:59:59.911646  603544 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 14:00:00.103078  603544 api_server.go:72] duration metric: took 518.945137ms to wait for apiserver process to appear ...
	I1124 14:00:00.103104  603544 api_server.go:88] waiting for apiserver healthz status ...
	I1124 14:00:00.103124  603544 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1124 14:00:00.108503  603544 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1124 14:00:00.109439  603544 api_server.go:141] control plane version: v1.34.1
	I1124 14:00:00.109489  603544 api_server.go:131] duration metric: took 6.368139ms to wait for apiserver health ...
	I1124 14:00:00.109504  603544 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 14:00:00.109511  603544 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1124 14:00:00.110866  603544 addons.go:530] duration metric: took 526.742545ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1124 14:00:00.112603  603544 system_pods.go:59] 8 kube-system pods found
	I1124 14:00:00.112643  603544 system_pods.go:61] "coredns-66bc5c9577-z4d5k" [a925cbe1-f3d5-4821-a1bf-afc3d3ed1062] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1124 14:00:00.112660  603544 system_pods.go:61] "etcd-newest-cni-305966" [f603e9b8-89c7-4735-97bb-82e67ab5fccd] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 14:00:00.112668  603544 system_pods.go:61] "kindnet-7c2kd" [353470b5-271a-4976-9823-aae696867ae3] Running
	I1124 14:00:00.112705  603544 system_pods.go:61] "kube-apiserver-newest-cni-305966" [4bbaeb61-1730-4352-815e-afc398299d99] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 14:00:00.112714  603544 system_pods.go:61] "kube-controller-manager-newest-cni-305966" [caf78e4b-40b4-467b-ade5-44a85043db3c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 14:00:00.112721  603544 system_pods.go:61] "kube-proxy-bwchr" [d1715fb6-8be2-493f-81c7-9e606cca9736] Running
	I1124 14:00:00.112729  603544 system_pods.go:61] "kube-scheduler-newest-cni-305966" [60d22afb-3af1-42aa-bce3-f4bc578e68ff] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 14:00:00.112739  603544 system_pods.go:61] "storage-provisioner" [408ded79-aabb-4020-867d-a7c3da485d56] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1124 14:00:00.112747  603544 system_pods.go:74] duration metric: took 3.235163ms to wait for pod list to return data ...
	I1124 14:00:00.112756  603544 default_sa.go:34] waiting for default service account to be created ...
	I1124 14:00:00.115107  603544 default_sa.go:45] found service account: "default"
	I1124 14:00:00.115129  603544 default_sa.go:55] duration metric: took 2.36072ms for default service account to be created ...
	I1124 14:00:00.115143  603544 kubeadm.go:587] duration metric: took 531.015892ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1124 14:00:00.115159  603544 node_conditions.go:102] verifying NodePressure condition ...
	I1124 14:00:00.117373  603544 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1124 14:00:00.117397  603544 node_conditions.go:123] node cpu capacity is 8
	I1124 14:00:00.117416  603544 node_conditions.go:105] duration metric: took 2.247492ms to run NodePressure ...
	I1124 14:00:00.117430  603544 start.go:242] waiting for startup goroutines ...
	I1124 14:00:00.415448  603544 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-305966" context rescaled to 1 replicas
	I1124 14:00:00.415488  603544 start.go:247] waiting for cluster config update ...
	I1124 14:00:00.415504  603544 start.go:256] writing updated cluster config ...
	I1124 14:00:00.415822  603544 ssh_runner.go:195] Run: rm -f paused
	I1124 14:00:00.467749  603544 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1124 14:00:00.469820  603544 out.go:179] * Done! kubectl is now configured to use "newest-cni-305966" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 24 13:59:59 newest-cni-305966 crio[777]: time="2025-11-24T13:59:59.6750695Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 13:59:59 newest-cni-305966 crio[777]: time="2025-11-24T13:59:59.679421807Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=7d268700-9d90-45ac-98f4-f46d7f09857e name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 24 13:59:59 newest-cni-305966 crio[777]: time="2025-11-24T13:59:59.680672717Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=a42b9643-8c21-45f1-9cf7-ae717887eee2 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 24 13:59:59 newest-cni-305966 crio[777]: time="2025-11-24T13:59:59.681369687Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 24 13:59:59 newest-cni-305966 crio[777]: time="2025-11-24T13:59:59.682291303Z" level=info msg="Ran pod sandbox a49877e9d51715791f304547c9ee33588ea6537fc6f8e1450a35178492525592 with infra container: kube-system/kube-proxy-bwchr/POD" id=7d268700-9d90-45ac-98f4-f46d7f09857e name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 24 13:59:59 newest-cni-305966 crio[777]: time="2025-11-24T13:59:59.682663199Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 24 13:59:59 newest-cni-305966 crio[777]: time="2025-11-24T13:59:59.683579904Z" level=info msg="Ran pod sandbox 271028f86f8c0272940c67549aeeefe0b4ef7759da04a1b18f2b1c764292acd9 with infra container: kube-system/kindnet-7c2kd/POD" id=a42b9643-8c21-45f1-9cf7-ae717887eee2 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 24 13:59:59 newest-cni-305966 crio[777]: time="2025-11-24T13:59:59.684333879Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=9dd92afc-b4d7-449a-a3b6-25759789ec4b name=/runtime.v1.ImageService/ImageStatus
	Nov 24 13:59:59 newest-cni-305966 crio[777]: time="2025-11-24T13:59:59.684803079Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=a6e4886c-1521-4487-af61-61dfcfa3a3ca name=/runtime.v1.ImageService/ImageStatus
	Nov 24 13:59:59 newest-cni-305966 crio[777]: time="2025-11-24T13:59:59.685852044Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=d518b37a-6aed-4ab6-a820-53b322d93002 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 13:59:59 newest-cni-305966 crio[777]: time="2025-11-24T13:59:59.686162117Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=227b885b-3b26-42e4-8c37-8c2db610bc38 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 13:59:59 newest-cni-305966 crio[777]: time="2025-11-24T13:59:59.691441954Z" level=info msg="Creating container: kube-system/kube-proxy-bwchr/kube-proxy" id=c3fee6bb-8aa8-4b0e-97df-38d9a4202290 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 13:59:59 newest-cni-305966 crio[777]: time="2025-11-24T13:59:59.691565285Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 13:59:59 newest-cni-305966 crio[777]: time="2025-11-24T13:59:59.694341213Z" level=info msg="Creating container: kube-system/kindnet-7c2kd/kindnet-cni" id=722c7a34-1e8d-4499-a580-bb49345fae45 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 13:59:59 newest-cni-305966 crio[777]: time="2025-11-24T13:59:59.694428889Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 13:59:59 newest-cni-305966 crio[777]: time="2025-11-24T13:59:59.698794529Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 13:59:59 newest-cni-305966 crio[777]: time="2025-11-24T13:59:59.700589664Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 13:59:59 newest-cni-305966 crio[777]: time="2025-11-24T13:59:59.700794872Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 13:59:59 newest-cni-305966 crio[777]: time="2025-11-24T13:59:59.701732126Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 13:59:59 newest-cni-305966 crio[777]: time="2025-11-24T13:59:59.783121651Z" level=info msg="Created container 6c8ce4aded0bd7ee0a64ee38ef0fa61c90b0bbaed7dd6da130b3c98d1b8e6a88: kube-system/kindnet-7c2kd/kindnet-cni" id=722c7a34-1e8d-4499-a580-bb49345fae45 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 13:59:59 newest-cni-305966 crio[777]: time="2025-11-24T13:59:59.784014699Z" level=info msg="Starting container: 6c8ce4aded0bd7ee0a64ee38ef0fa61c90b0bbaed7dd6da130b3c98d1b8e6a88" id=8d7d763d-9aa5-4259-9578-b0cc07478e78 name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 13:59:59 newest-cni-305966 crio[777]: time="2025-11-24T13:59:59.786946922Z" level=info msg="Started container" PID=1581 containerID=6c8ce4aded0bd7ee0a64ee38ef0fa61c90b0bbaed7dd6da130b3c98d1b8e6a88 description=kube-system/kindnet-7c2kd/kindnet-cni id=8d7d763d-9aa5-4259-9578-b0cc07478e78 name=/runtime.v1.RuntimeService/StartContainer sandboxID=271028f86f8c0272940c67549aeeefe0b4ef7759da04a1b18f2b1c764292acd9
	Nov 24 13:59:59 newest-cni-305966 crio[777]: time="2025-11-24T13:59:59.788524113Z" level=info msg="Created container 7553c8dc2cb2b19738c583c362b7e78578eae635fc4b7e8f5ba342d243659e3e: kube-system/kube-proxy-bwchr/kube-proxy" id=c3fee6bb-8aa8-4b0e-97df-38d9a4202290 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 13:59:59 newest-cni-305966 crio[777]: time="2025-11-24T13:59:59.789583295Z" level=info msg="Starting container: 7553c8dc2cb2b19738c583c362b7e78578eae635fc4b7e8f5ba342d243659e3e" id=fec36111-c30f-4a43-bab6-406ec0a77b57 name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 13:59:59 newest-cni-305966 crio[777]: time="2025-11-24T13:59:59.794828059Z" level=info msg="Started container" PID=1582 containerID=7553c8dc2cb2b19738c583c362b7e78578eae635fc4b7e8f5ba342d243659e3e description=kube-system/kube-proxy-bwchr/kube-proxy id=fec36111-c30f-4a43-bab6-406ec0a77b57 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a49877e9d51715791f304547c9ee33588ea6537fc6f8e1450a35178492525592
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	6c8ce4aded0bd       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   2 seconds ago       Running             kindnet-cni               0                   271028f86f8c0       kindnet-7c2kd                               kube-system
	7553c8dc2cb2b       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   2 seconds ago       Running             kube-proxy                0                   a49877e9d5171       kube-proxy-bwchr                            kube-system
	236c76f82b3b9       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   12 seconds ago      Running             kube-scheduler            0                   57acb7e4c6cb7       kube-scheduler-newest-cni-305966            kube-system
	381ed71febed7       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   12 seconds ago      Running             kube-apiserver            0                   7c3d4a778762a       kube-apiserver-newest-cni-305966            kube-system
	79884f5ab4d0b       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   12 seconds ago      Running             kube-controller-manager   0                   3225bd38f92c2       kube-controller-manager-newest-cni-305966   kube-system
	32653f913ed0e       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   12 seconds ago      Running             etcd                      0                   5807c3a5fb8fc       etcd-newest-cni-305966                      kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-305966
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-305966
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b5d1c9f4e75f4e638a533695fd62619949cefcab
	                    minikube.k8s.io/name=newest-cni-305966
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T13_59_54_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 13:59:51 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-305966
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 13:59:53 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 13:59:54 +0000   Mon, 24 Nov 2025 13:59:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 13:59:54 +0000   Mon, 24 Nov 2025 13:59:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 13:59:54 +0000   Mon, 24 Nov 2025 13:59:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Mon, 24 Nov 2025 13:59:54 +0000   Mon, 24 Nov 2025 13:59:49 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    newest-cni-305966
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                ecbd5efe-848f-483d-9396-2b651bf1384a
	  Boot ID:                    9a34d64a-eb17-4892-9c0b-855837aec864
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-305966                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         8s
	  kube-system                 kindnet-7c2kd                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      2s
	  kube-system                 kube-apiserver-newest-cni-305966             250m (3%)     0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kube-controller-manager-newest-cni-305966    200m (2%)     0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kube-proxy-bwchr                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  kube-system                 kube-scheduler-newest-cni-305966             100m (1%)     0 (0%)      0 (0%)           0 (0%)         8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 1s    kube-proxy       
	  Normal  Starting                 8s    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7s    kubelet          Node newest-cni-305966 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7s    kubelet          Node newest-cni-305966 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7s    kubelet          Node newest-cni-305966 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3s    node-controller  Node newest-cni-305966 event: Registered Node newest-cni-305966 in Controller
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a c8 62 0b 56 43 08 06
	[  +0.000389] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 7e 1c 4f d3 0f 6c 08 06
	[Nov24 13:16] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +1.054353] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +1.023912] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +1.023872] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +1.023897] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +1.023891] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +2.047768] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +4.031637] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +8.191144] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[ +16.382308] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[Nov24 13:17] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	
	
	==> etcd [32653f913ed0e9b9c5bc6823cb116821332d7ac557ad6c905f0212cc4ea89583] <==
	{"level":"warn","ts":"2025-11-24T13:59:50.648487Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43714","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:59:50.657294Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43722","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:59:50.664310Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43742","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:59:50.670332Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43764","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:59:50.676791Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43806","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:59:50.683184Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43812","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:59:50.690009Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:59:50.695875Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43846","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:59:50.703120Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43864","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:59:50.709981Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43886","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:59:50.715733Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43912","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:59:50.722627Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43930","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:59:50.729954Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43934","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:59:50.736941Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43956","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:59:50.743840Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43982","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:59:50.750486Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43988","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:59:50.757159Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44010","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:59:50.764278Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44020","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:59:50.770583Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44040","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:59:50.777263Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44062","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:59:50.783968Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44074","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:59:50.795827Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44092","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:59:50.802249Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44118","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:59:50.809305Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44142","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:59:50.858485Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:44164","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 14:00:01 up  2:42,  0 user,  load average: 2.89, 2.94, 2.04
	Linux newest-cni-305966 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [6c8ce4aded0bd7ee0a64ee38ef0fa61c90b0bbaed7dd6da130b3c98d1b8e6a88] <==
	I1124 14:00:00.024747       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 14:00:00.025017       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1124 14:00:00.025145       1 main.go:148] setting mtu 1500 for CNI 
	I1124 14:00:00.025159       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 14:00:00.025176       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T14:00:00Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 14:00:00.222255       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 14:00:00.222289       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 14:00:00.222312       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 14:00:00.222469       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	
	
	==> kube-apiserver [381ed71febed7b467f51ef7a14adfbe27b2afcad1595d420128ec6d228c30ab5] <==
	I1124 13:59:51.385450       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1124 13:59:51.388934       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1124 13:59:51.388983       1 aggregator.go:171] initial CRD sync complete...
	I1124 13:59:51.388993       1 autoregister_controller.go:144] Starting autoregister controller
	I1124 13:59:51.389000       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1124 13:59:51.389006       1 cache.go:39] Caches are synced for autoregister controller
	I1124 13:59:51.390617       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 13:59:51.413709       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1124 13:59:52.279157       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1124 13:59:52.282801       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1124 13:59:52.282818       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1124 13:59:52.726737       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 13:59:52.758945       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 13:59:52.883292       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1124 13:59:52.888491       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I1124 13:59:52.889365       1 controller.go:667] quota admission added evaluator for: endpoints
	I1124 13:59:52.892995       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1124 13:59:53.598841       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1124 13:59:54.016767       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1124 13:59:54.025779       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1124 13:59:54.033306       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1124 13:59:59.351048       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1124 13:59:59.604927       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 13:59:59.609960       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 13:59:59.653583       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [79884f5ab4d0b3f07c14e82ac490340402932e9db067d87af0d48e5ea488892f] <==
	I1124 13:59:58.553732       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1124 13:59:58.566772       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1124 13:59:58.573250       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="newest-cni-305966" podCIDRs=["10.42.0.0/24"]
	I1124 13:59:58.598482       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1124 13:59:58.598496       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1124 13:59:58.598707       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1124 13:59:58.598724       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1124 13:59:58.598752       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1124 13:59:58.598839       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1124 13:59:58.598920       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1124 13:59:58.598944       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1124 13:59:58.599041       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1124 13:59:58.601758       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1124 13:59:58.601769       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1124 13:59:58.601904       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1124 13:59:58.603058       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1124 13:59:58.603159       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1124 13:59:58.608093       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1124 13:59:58.608116       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1124 13:59:58.608742       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 13:59:58.608748       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1124 13:59:58.609220       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1124 13:59:58.610090       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 13:59:58.615519       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1124 13:59:58.636379       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [7553c8dc2cb2b19738c583c362b7e78578eae635fc4b7e8f5ba342d243659e3e] <==
	I1124 13:59:59.856609       1 server_linux.go:53] "Using iptables proxy"
	I1124 13:59:59.930645       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1124 14:00:00.031053       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1124 14:00:00.031084       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1124 14:00:00.031196       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 14:00:00.051951       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 14:00:00.052031       1 server_linux.go:132] "Using iptables Proxier"
	I1124 14:00:00.058517       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 14:00:00.059075       1 server.go:527] "Version info" version="v1.34.1"
	I1124 14:00:00.059116       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 14:00:00.060920       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 14:00:00.060943       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 14:00:00.060985       1 config.go:106] "Starting endpoint slice config controller"
	I1124 14:00:00.060992       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 14:00:00.060997       1 config.go:200] "Starting service config controller"
	I1124 14:00:00.061013       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 14:00:00.061323       1 config.go:309] "Starting node config controller"
	I1124 14:00:00.061332       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 14:00:00.061339       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1124 14:00:00.161119       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1124 14:00:00.161195       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1124 14:00:00.161202       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [236c76f82b3b9947f520d178272f596df3c50f1e2a72e8d07a20c0619c01691b] <==
	E1124 13:59:51.347859       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1124 13:59:51.348193       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1124 13:59:51.348299       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1124 13:59:51.348414       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1124 13:59:51.348496       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1124 13:59:51.348780       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1124 13:59:51.348885       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1124 13:59:51.349004       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1124 13:59:51.349066       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1124 13:59:51.349120       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1124 13:59:51.349158       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1124 13:59:51.349215       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1124 13:59:51.349346       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1124 13:59:51.349404       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1124 13:59:51.349617       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1124 13:59:51.349704       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1124 13:59:52.150319       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1124 13:59:52.183371       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1124 13:59:52.261701       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1124 13:59:52.275851       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1124 13:59:52.370455       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1124 13:59:52.428752       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1124 13:59:52.444037       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1124 13:59:52.545489       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	I1124 13:59:55.542829       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 24 13:59:54 newest-cni-305966 kubelet[1319]: I1124 13:59:54.863105    1319 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 24 13:59:54 newest-cni-305966 kubelet[1319]: I1124 13:59:54.907311    1319 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-305966"
	Nov 24 13:59:54 newest-cni-305966 kubelet[1319]: I1124 13:59:54.907524    1319 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-305966"
	Nov 24 13:59:54 newest-cni-305966 kubelet[1319]: I1124 13:59:54.907696    1319 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-305966"
	Nov 24 13:59:54 newest-cni-305966 kubelet[1319]: I1124 13:59:54.908051    1319 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-305966"
	Nov 24 13:59:54 newest-cni-305966 kubelet[1319]: E1124 13:59:54.923247    1319 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-305966\" already exists" pod="kube-system/etcd-newest-cni-305966"
	Nov 24 13:59:54 newest-cni-305966 kubelet[1319]: E1124 13:59:54.929245    1319 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-305966\" already exists" pod="kube-system/kube-apiserver-newest-cni-305966"
	Nov 24 13:59:54 newest-cni-305966 kubelet[1319]: E1124 13:59:54.929431    1319 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-305966\" already exists" pod="kube-system/kube-scheduler-newest-cni-305966"
	Nov 24 13:59:54 newest-cni-305966 kubelet[1319]: E1124 13:59:54.929514    1319 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-305966\" already exists" pod="kube-system/kube-controller-manager-newest-cni-305966"
	Nov 24 13:59:54 newest-cni-305966 kubelet[1319]: I1124 13:59:54.974688    1319 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-newest-cni-305966" podStartSLOduration=1.974665493 podStartE2EDuration="1.974665493s" podCreationTimestamp="2025-11-24 13:59:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 13:59:54.974340629 +0000 UTC m=+1.194141754" watchObservedRunningTime="2025-11-24 13:59:54.974665493 +0000 UTC m=+1.194466625"
	Nov 24 13:59:54 newest-cni-305966 kubelet[1319]: I1124 13:59:54.974854    1319 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-newest-cni-305966" podStartSLOduration=1.9748481070000001 podStartE2EDuration="1.974848107s" podCreationTimestamp="2025-11-24 13:59:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 13:59:54.962343792 +0000 UTC m=+1.182144925" watchObservedRunningTime="2025-11-24 13:59:54.974848107 +0000 UTC m=+1.194649243"
	Nov 24 13:59:54 newest-cni-305966 kubelet[1319]: I1124 13:59:54.993821    1319 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-newest-cni-305966" podStartSLOduration=1.993799071 podStartE2EDuration="1.993799071s" podCreationTimestamp="2025-11-24 13:59:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 13:59:54.982308745 +0000 UTC m=+1.202109881" watchObservedRunningTime="2025-11-24 13:59:54.993799071 +0000 UTC m=+1.213600197"
	Nov 24 13:59:55 newest-cni-305966 kubelet[1319]: I1124 13:59:55.004647    1319 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-newest-cni-305966" podStartSLOduration=2.004627068 podStartE2EDuration="2.004627068s" podCreationTimestamp="2025-11-24 13:59:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 13:59:54.99400814 +0000 UTC m=+1.213809274" watchObservedRunningTime="2025-11-24 13:59:55.004627068 +0000 UTC m=+1.224428202"
	Nov 24 13:59:58 newest-cni-305966 kubelet[1319]: I1124 13:59:58.620340    1319 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Nov 24 13:59:58 newest-cni-305966 kubelet[1319]: I1124 13:59:58.621046    1319 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Nov 24 13:59:59 newest-cni-305966 kubelet[1319]: I1124 13:59:59.408224    1319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d1715fb6-8be2-493f-81c7-9e606cca9736-xtables-lock\") pod \"kube-proxy-bwchr\" (UID: \"d1715fb6-8be2-493f-81c7-9e606cca9736\") " pod="kube-system/kube-proxy-bwchr"
	Nov 24 13:59:59 newest-cni-305966 kubelet[1319]: I1124 13:59:59.408258    1319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d1715fb6-8be2-493f-81c7-9e606cca9736-lib-modules\") pod \"kube-proxy-bwchr\" (UID: \"d1715fb6-8be2-493f-81c7-9e606cca9736\") " pod="kube-system/kube-proxy-bwchr"
	Nov 24 13:59:59 newest-cni-305966 kubelet[1319]: I1124 13:59:59.408284    1319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfpr8\" (UniqueName: \"kubernetes.io/projected/d1715fb6-8be2-493f-81c7-9e606cca9736-kube-api-access-tfpr8\") pod \"kube-proxy-bwchr\" (UID: \"d1715fb6-8be2-493f-81c7-9e606cca9736\") " pod="kube-system/kube-proxy-bwchr"
	Nov 24 13:59:59 newest-cni-305966 kubelet[1319]: I1124 13:59:59.408318    1319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/353470b5-271a-4976-9823-aae696867ae3-lib-modules\") pod \"kindnet-7c2kd\" (UID: \"353470b5-271a-4976-9823-aae696867ae3\") " pod="kube-system/kindnet-7c2kd"
	Nov 24 13:59:59 newest-cni-305966 kubelet[1319]: I1124 13:59:59.408368    1319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z96qv\" (UniqueName: \"kubernetes.io/projected/353470b5-271a-4976-9823-aae696867ae3-kube-api-access-z96qv\") pod \"kindnet-7c2kd\" (UID: \"353470b5-271a-4976-9823-aae696867ae3\") " pod="kube-system/kindnet-7c2kd"
	Nov 24 13:59:59 newest-cni-305966 kubelet[1319]: I1124 13:59:59.408414    1319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/353470b5-271a-4976-9823-aae696867ae3-xtables-lock\") pod \"kindnet-7c2kd\" (UID: \"353470b5-271a-4976-9823-aae696867ae3\") " pod="kube-system/kindnet-7c2kd"
	Nov 24 13:59:59 newest-cni-305966 kubelet[1319]: I1124 13:59:59.408441    1319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d1715fb6-8be2-493f-81c7-9e606cca9736-kube-proxy\") pod \"kube-proxy-bwchr\" (UID: \"d1715fb6-8be2-493f-81c7-9e606cca9736\") " pod="kube-system/kube-proxy-bwchr"
	Nov 24 13:59:59 newest-cni-305966 kubelet[1319]: I1124 13:59:59.408463    1319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/353470b5-271a-4976-9823-aae696867ae3-cni-cfg\") pod \"kindnet-7c2kd\" (UID: \"353470b5-271a-4976-9823-aae696867ae3\") " pod="kube-system/kindnet-7c2kd"
	Nov 24 13:59:59 newest-cni-305966 kubelet[1319]: I1124 13:59:59.959088    1319 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-7c2kd" podStartSLOduration=0.959063776 podStartE2EDuration="959.063776ms" podCreationTimestamp="2025-11-24 13:59:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 13:59:59.945942944 +0000 UTC m=+6.165744077" watchObservedRunningTime="2025-11-24 13:59:59.959063776 +0000 UTC m=+6.178864909"
	Nov 24 14:00:00 newest-cni-305966 kubelet[1319]: I1124 14:00:00.781725    1319 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-bwchr" podStartSLOduration=1.7817030219999999 podStartE2EDuration="1.781703022s" podCreationTimestamp="2025-11-24 13:59:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 13:59:59.959724805 +0000 UTC m=+6.179525929" watchObservedRunningTime="2025-11-24 14:00:00.781703022 +0000 UTC m=+7.001504157"
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-305966 -n newest-cni-305966
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-305966 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-z4d5k storage-provisioner
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-305966 describe pod coredns-66bc5c9577-z4d5k storage-provisioner
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-305966 describe pod coredns-66bc5c9577-z4d5k storage-provisioner: exit status 1 (71.560211ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-z4d5k" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-305966 describe pod coredns-66bc5c9577-z4d5k storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.09s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.12s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-098307 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-098307 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (264.480749ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T14:00:06Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-098307 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-098307 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-098307 describe deploy/metrics-server -n kube-system: exit status 1 (62.634784ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-098307 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-098307
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-098307:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "bd0eb14a7bb05301fb7bd937daf3de032579eb0ef51ae85aca7bbf9d5079f948",
	        "Created": "2025-11-24T13:59:20.659772726Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 598967,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T13:59:20.689973822Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/bd0eb14a7bb05301fb7bd937daf3de032579eb0ef51ae85aca7bbf9d5079f948/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/bd0eb14a7bb05301fb7bd937daf3de032579eb0ef51ae85aca7bbf9d5079f948/hostname",
	        "HostsPath": "/var/lib/docker/containers/bd0eb14a7bb05301fb7bd937daf3de032579eb0ef51ae85aca7bbf9d5079f948/hosts",
	        "LogPath": "/var/lib/docker/containers/bd0eb14a7bb05301fb7bd937daf3de032579eb0ef51ae85aca7bbf9d5079f948/bd0eb14a7bb05301fb7bd937daf3de032579eb0ef51ae85aca7bbf9d5079f948-json.log",
	        "Name": "/default-k8s-diff-port-098307",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "default-k8s-diff-port-098307:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-098307",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "bd0eb14a7bb05301fb7bd937daf3de032579eb0ef51ae85aca7bbf9d5079f948",
	                "LowerDir": "/var/lib/docker/overlay2/8b9802f0f7129508b126d28155eba29f729d36fdf91f74fe0dfcabd3bc59caec-init/diff:/var/lib/docker/overlay2/b17d6205cf290186b389ac7c1255d7274fea54ef27df9ff8755bddd2d25eb638/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8b9802f0f7129508b126d28155eba29f729d36fdf91f74fe0dfcabd3bc59caec/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8b9802f0f7129508b126d28155eba29f729d36fdf91f74fe0dfcabd3bc59caec/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8b9802f0f7129508b126d28155eba29f729d36fdf91f74fe0dfcabd3bc59caec/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-098307",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-098307/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-098307",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-098307",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-098307",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "08773a4aa2a5aa19c150077333d0de7dd31668936ac6b251de8cff4bdb062cec",
	            "SandboxKey": "/var/run/docker/netns/08773a4aa2a5",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33453"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33454"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33457"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33455"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33456"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-098307": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8c6a8563f604dbd2ac02c075d8fe7a50789753dd9a0a4910f48e583fa79e5934",
	                    "EndpointID": "79a636de9b83ee417e6fbcef4419f71ba25f2f7c0a24665f0fec33ec2cf79c82",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "2a:df:a5:c3:a4:7c",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-098307",
	                        "bd0eb14a7bb0"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-098307 -n default-k8s-diff-port-098307
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-098307 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-098307 logs -n 25: (1.031427011s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable dashboard -p no-preload-495729 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-495729            │ jenkins │ v1.37.0 │ 24 Nov 25 13:58 UTC │ 24 Nov 25 13:58 UTC │
	│ start   │ -p no-preload-495729 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-495729            │ jenkins │ v1.37.0 │ 24 Nov 25 13:58 UTC │ 24 Nov 25 13:58 UTC │
	│ start   │ -p cert-expiration-107341 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-107341       │ jenkins │ v1.37.0 │ 24 Nov 25 13:58 UTC │ 24 Nov 25 13:58 UTC │
	│ delete  │ -p cert-expiration-107341                                                                                                                                                                                                                     │ cert-expiration-107341       │ jenkins │ v1.37.0 │ 24 Nov 25 13:58 UTC │ 24 Nov 25 13:58 UTC │
	│ start   │ -p embed-certs-456660 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-456660           │ jenkins │ v1.37.0 │ 24 Nov 25 13:58 UTC │ 24 Nov 25 14:00 UTC │
	│ image   │ no-preload-495729 image list --format=json                                                                                                                                                                                                    │ no-preload-495729            │ jenkins │ v1.37.0 │ 24 Nov 25 13:59 UTC │ 24 Nov 25 13:59 UTC │
	│ pause   │ -p no-preload-495729 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-495729            │ jenkins │ v1.37.0 │ 24 Nov 25 13:59 UTC │                     │
	│ delete  │ -p no-preload-495729                                                                                                                                                                                                                          │ no-preload-495729            │ jenkins │ v1.37.0 │ 24 Nov 25 13:59 UTC │ 24 Nov 25 13:59 UTC │
	│ delete  │ -p no-preload-495729                                                                                                                                                                                                                          │ no-preload-495729            │ jenkins │ v1.37.0 │ 24 Nov 25 13:59 UTC │ 24 Nov 25 13:59 UTC │
	│ delete  │ -p disable-driver-mounts-036543                                                                                                                                                                                                               │ disable-driver-mounts-036543 │ jenkins │ v1.37.0 │ 24 Nov 25 13:59 UTC │ 24 Nov 25 13:59 UTC │
	│ start   │ -p default-k8s-diff-port-098307 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-098307 │ jenkins │ v1.37.0 │ 24 Nov 25 13:59 UTC │ 24 Nov 25 13:59 UTC │
	│ image   │ old-k8s-version-551674 image list --format=json                                                                                                                                                                                               │ old-k8s-version-551674       │ jenkins │ v1.37.0 │ 24 Nov 25 13:59 UTC │ 24 Nov 25 13:59 UTC │
	│ pause   │ -p old-k8s-version-551674 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-551674       │ jenkins │ v1.37.0 │ 24 Nov 25 13:59 UTC │                     │
	│ delete  │ -p old-k8s-version-551674                                                                                                                                                                                                                     │ old-k8s-version-551674       │ jenkins │ v1.37.0 │ 24 Nov 25 13:59 UTC │ 24 Nov 25 13:59 UTC │
	│ delete  │ -p old-k8s-version-551674                                                                                                                                                                                                                     │ old-k8s-version-551674       │ jenkins │ v1.37.0 │ 24 Nov 25 13:59 UTC │ 24 Nov 25 13:59 UTC │
	│ start   │ -p newest-cni-305966 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-305966            │ jenkins │ v1.37.0 │ 24 Nov 25 13:59 UTC │ 24 Nov 25 14:00 UTC │
	│ start   │ -p kubernetes-upgrade-061040 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-061040    │ jenkins │ v1.37.0 │ 24 Nov 25 13:59 UTC │                     │
	│ start   │ -p kubernetes-upgrade-061040 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-061040    │ jenkins │ v1.37.0 │ 24 Nov 25 13:59 UTC │ 24 Nov 25 14:00 UTC │
	│ addons  │ enable metrics-server -p newest-cni-305966 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-305966            │ jenkins │ v1.37.0 │ 24 Nov 25 14:00 UTC │                     │
	│ stop    │ -p newest-cni-305966 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-305966            │ jenkins │ v1.37.0 │ 24 Nov 25 14:00 UTC │ 24 Nov 25 14:00 UTC │
	│ delete  │ -p kubernetes-upgrade-061040                                                                                                                                                                                                                  │ kubernetes-upgrade-061040    │ jenkins │ v1.37.0 │ 24 Nov 25 14:00 UTC │ 24 Nov 25 14:00 UTC │
	│ addons  │ enable dashboard -p newest-cni-305966 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-305966            │ jenkins │ v1.37.0 │ 24 Nov 25 14:00 UTC │ 24 Nov 25 14:00 UTC │
	│ start   │ -p newest-cni-305966 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-305966            │ jenkins │ v1.37.0 │ 24 Nov 25 14:00 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-098307 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-098307 │ jenkins │ v1.37.0 │ 24 Nov 25 14:00 UTC │                     │
	│ start   │ -p auto-165759 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-165759                  │ jenkins │ v1.37.0 │ 24 Nov 25 14:00 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 14:00:06
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 14:00:06.639955  612215 out.go:360] Setting OutFile to fd 1 ...
	I1124 14:00:06.640075  612215 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 14:00:06.640086  612215 out.go:374] Setting ErrFile to fd 2...
	I1124 14:00:06.640093  612215 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 14:00:06.640294  612215 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-348000/.minikube/bin
	I1124 14:00:06.640720  612215 out.go:368] Setting JSON to false
	I1124 14:00:06.641938  612215 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":9754,"bootTime":1763983053,"procs":319,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 14:00:06.641994  612215 start.go:143] virtualization: kvm guest
	I1124 14:00:06.643737  612215 out.go:179] * [auto-165759] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1124 14:00:06.644898  612215 out.go:179]   - MINIKUBE_LOCATION=21932
	I1124 14:00:06.644920  612215 notify.go:221] Checking for updates...
	I1124 14:00:06.647963  612215 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 14:00:06.648994  612215 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21932-348000/kubeconfig
	I1124 14:00:06.650029  612215 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21932-348000/.minikube
	I1124 14:00:06.651162  612215 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1124 14:00:06.652296  612215 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 14:00:06.653986  612215 config.go:182] Loaded profile config "default-k8s-diff-port-098307": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 14:00:06.654117  612215 config.go:182] Loaded profile config "embed-certs-456660": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 14:00:06.654273  612215 config.go:182] Loaded profile config "newest-cni-305966": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 14:00:06.654410  612215 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 14:00:06.678126  612215 docker.go:124] docker version: linux-29.0.3:Docker Engine - Community
	I1124 14:00:06.678224  612215 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 14:00:06.738636  612215 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-24 14:00:06.729329331 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 14:00:06.738783  612215 docker.go:319] overlay module found
	I1124 14:00:06.740540  612215 out.go:179] * Using the docker driver based on user configuration
	I1124 14:00:06.741585  612215 start.go:309] selected driver: docker
	I1124 14:00:06.741600  612215 start.go:927] validating driver "docker" against <nil>
	I1124 14:00:06.741610  612215 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 14:00:06.742172  612215 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 14:00:06.795665  612215 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-24 14:00:06.785850803 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 14:00:06.795856  612215 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1124 14:00:06.796102  612215 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 14:00:06.797705  612215 out.go:179] * Using Docker driver with root privileges
	I1124 14:00:06.798793  612215 cni.go:84] Creating CNI manager for ""
	I1124 14:00:06.798864  612215 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 14:00:06.798878  612215 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1124 14:00:06.798993  612215 start.go:353] cluster config:
	{Name:auto-165759 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-165759 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:cri
o CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: Au
toPauseInterval:1m0s}
	I1124 14:00:06.800311  612215 out.go:179] * Starting "auto-165759" primary control-plane node in "auto-165759" cluster
	I1124 14:00:06.801369  612215 cache.go:134] Beginning downloading kic base image for docker with crio
	I1124 14:00:06.802406  612215 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1124 14:00:06.803315  612215 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 14:00:06.803344  612215 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21932-348000/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1124 14:00:06.803357  612215 cache.go:65] Caching tarball of preloaded images
	I1124 14:00:06.803391  612215 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1124 14:00:06.803462  612215 preload.go:238] Found /home/jenkins/minikube-integration/21932-348000/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1124 14:00:06.803474  612215 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1124 14:00:06.803574  612215 profile.go:143] Saving config to /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/auto-165759/config.json ...
	I1124 14:00:06.803604  612215 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/auto-165759/config.json: {Name:mkafcf12b893460417f613b5956b061b507857b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:00:06.824406  612215 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1124 14:00:06.824437  612215 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1124 14:00:06.824457  612215 cache.go:240] Successfully downloaded all kic artifacts
	I1124 14:00:06.824499  612215 start.go:360] acquireMachinesLock for auto-165759: {Name:mke2972eaae0a3077df79966ba25decc1725d099 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 14:00:06.824601  612215 start.go:364] duration metric: took 79.565µs to acquireMachinesLock for "auto-165759"
	I1124 14:00:06.824623  612215 start.go:93] Provisioning new machine with config: &{Name:auto-165759 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-165759 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: Soc
ketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 14:00:06.824701  612215 start.go:125] createHost starting for "" (driver="docker")
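
The start trace above uses the glog prefix documented in its own header ([IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg). As a minimal sketch, not part of the test harness, the following Go program filters such a log for warnings and errors on stdin; the regular expression simply mirrors that format string.

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// Mirrors the header's format string: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
var glogLine = regexp.MustCompile(`^\s*([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) (\S+:\d+)\] (.*)$`)

func main() {
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		m := glogLine.FindStringSubmatch(sc.Text())
		if m == nil {
			continue // continuation lines (wrapped config dumps) carry no prefix
		}
		if m[1] == "W" || m[1] == "E" {
			// severity, wall-clock time, source location, message
			fmt.Printf("%s %s %s %s\n", m[1], m[3], m[5], m[6])
		}
	}
}
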
	
	
	==> CRI-O <==
	Nov 24 13:59:55 default-k8s-diff-port-098307 crio[773]: time="2025-11-24T13:59:55.45359724Z" level=info msg="Starting container: 9d2eb17d40318f0235a9e6d05ffd52ac8735080bc30283b96c5a8aa33a15c3bd" id=7651d3e7-f83b-4113-8297-4151583de41f name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 13:59:55 default-k8s-diff-port-098307 crio[773]: time="2025-11-24T13:59:55.45562073Z" level=info msg="Started container" PID=1828 containerID=9d2eb17d40318f0235a9e6d05ffd52ac8735080bc30283b96c5a8aa33a15c3bd description=kube-system/coredns-66bc5c9577-kzf7b/coredns id=7651d3e7-f83b-4113-8297-4151583de41f name=/runtime.v1.RuntimeService/StartContainer sandboxID=27c46bd709f108963909cff60b2ace08b9d344deb1771a97b3135e9eea34766e
	Nov 24 13:59:58 default-k8s-diff-port-098307 crio[773]: time="2025-11-24T13:59:58.408575927Z" level=info msg="Running pod sandbox: default/busybox/POD" id=0377bab3-54cb-48f1-b589-1c10ba753733 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 24 13:59:58 default-k8s-diff-port-098307 crio[773]: time="2025-11-24T13:59:58.408660369Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 13:59:58 default-k8s-diff-port-098307 crio[773]: time="2025-11-24T13:59:58.414572325Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:9bc08edef0cbb0dad3095b929470f28ab1562a0861e127588262b7920b6dc1a7 UID:0cbb62f6-2583-44e5-8c7f-99a32975fb68 NetNS:/var/run/netns/4478eb91-977b-40cd-85d4-54989e30dae1 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00027e458}] Aliases:map[]}"
	Nov 24 13:59:58 default-k8s-diff-port-098307 crio[773]: time="2025-11-24T13:59:58.414614454Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 24 13:59:58 default-k8s-diff-port-098307 crio[773]: time="2025-11-24T13:59:58.426121743Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:9bc08edef0cbb0dad3095b929470f28ab1562a0861e127588262b7920b6dc1a7 UID:0cbb62f6-2583-44e5-8c7f-99a32975fb68 NetNS:/var/run/netns/4478eb91-977b-40cd-85d4-54989e30dae1 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00027e458}] Aliases:map[]}"
	Nov 24 13:59:58 default-k8s-diff-port-098307 crio[773]: time="2025-11-24T13:59:58.426268874Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 24 13:59:58 default-k8s-diff-port-098307 crio[773]: time="2025-11-24T13:59:58.427047937Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 24 13:59:58 default-k8s-diff-port-098307 crio[773]: time="2025-11-24T13:59:58.428078528Z" level=info msg="Ran pod sandbox 9bc08edef0cbb0dad3095b929470f28ab1562a0861e127588262b7920b6dc1a7 with infra container: default/busybox/POD" id=0377bab3-54cb-48f1-b589-1c10ba753733 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 24 13:59:58 default-k8s-diff-port-098307 crio[773]: time="2025-11-24T13:59:58.429352053Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=0c98dea3-3930-4202-b443-91c841a44a62 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 13:59:58 default-k8s-diff-port-098307 crio[773]: time="2025-11-24T13:59:58.429469336Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=0c98dea3-3930-4202-b443-91c841a44a62 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 13:59:58 default-k8s-diff-port-098307 crio[773]: time="2025-11-24T13:59:58.429505318Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=0c98dea3-3930-4202-b443-91c841a44a62 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 13:59:58 default-k8s-diff-port-098307 crio[773]: time="2025-11-24T13:59:58.430363429Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=b9f52826-8655-4184-bd6a-03c621f75413 name=/runtime.v1.ImageService/PullImage
	Nov 24 13:59:58 default-k8s-diff-port-098307 crio[773]: time="2025-11-24T13:59:58.432115335Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 24 13:59:59 default-k8s-diff-port-098307 crio[773]: time="2025-11-24T13:59:59.198835133Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=b9f52826-8655-4184-bd6a-03c621f75413 name=/runtime.v1.ImageService/PullImage
	Nov 24 13:59:59 default-k8s-diff-port-098307 crio[773]: time="2025-11-24T13:59:59.200147993Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=15136810-3344-4b0b-a525-794a7119cd3d name=/runtime.v1.ImageService/ImageStatus
	Nov 24 13:59:59 default-k8s-diff-port-098307 crio[773]: time="2025-11-24T13:59:59.205104688Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=7bd3e27a-19c7-432b-a553-ffb0446cb9ae name=/runtime.v1.ImageService/ImageStatus
	Nov 24 13:59:59 default-k8s-diff-port-098307 crio[773]: time="2025-11-24T13:59:59.208493294Z" level=info msg="Creating container: default/busybox/busybox" id=88de23c2-a485-42d0-84a0-cbb64e56155d name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 13:59:59 default-k8s-diff-port-098307 crio[773]: time="2025-11-24T13:59:59.208862724Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 13:59:59 default-k8s-diff-port-098307 crio[773]: time="2025-11-24T13:59:59.213484568Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 13:59:59 default-k8s-diff-port-098307 crio[773]: time="2025-11-24T13:59:59.214040782Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 13:59:59 default-k8s-diff-port-098307 crio[773]: time="2025-11-24T13:59:59.251801509Z" level=info msg="Created container 09bd85966fc393860204b2a385030cb209b56810b95e39861d4175cff2a521d0: default/busybox/busybox" id=88de23c2-a485-42d0-84a0-cbb64e56155d name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 13:59:59 default-k8s-diff-port-098307 crio[773]: time="2025-11-24T13:59:59.252460129Z" level=info msg="Starting container: 09bd85966fc393860204b2a385030cb209b56810b95e39861d4175cff2a521d0" id=9cf6ccb3-db17-47f6-91b2-b4d58341527c name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 13:59:59 default-k8s-diff-port-098307 crio[773]: time="2025-11-24T13:59:59.254807701Z" level=info msg="Started container" PID=1904 containerID=09bd85966fc393860204b2a385030cb209b56810b95e39861d4175cff2a521d0 description=default/busybox/busybox id=9cf6ccb3-db17-47f6-91b2-b4d58341527c name=/runtime.v1.RuntimeService/StartContainer sandboxID=9bc08edef0cbb0dad3095b929470f28ab1562a0861e127588262b7920b6dc1a7
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	09bd85966fc39       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   8 seconds ago       Running             busybox                   0                   9bc08edef0cbb       busybox                                                default
	9d2eb17d40318       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      12 seconds ago      Running             coredns                   0                   27c46bd709f10       coredns-66bc5c9577-kzf7b                               kube-system
	ab880f6ff9533       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 seconds ago      Running             storage-provisioner       0                   980552eadc912       storage-provisioner                                    kube-system
	a228b1070b522       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      23 seconds ago      Running             kube-proxy                0                   02185fd2008db       kube-proxy-8ck8x                                       kube-system
	186333f3c05be       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      23 seconds ago      Running             kindnet-cni               0                   9b72083d39997       kindnet-qswz4                                          kube-system
	5c519c4537373       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      34 seconds ago      Running             kube-scheduler            0                   2d4392d9f9e9a       kube-scheduler-default-k8s-diff-port-098307            kube-system
	7545509aa3152       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      34 seconds ago      Running             kube-controller-manager   0                   ce31c4e786719       kube-controller-manager-default-k8s-diff-port-098307   kube-system
	0513c56b4b2e6       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      34 seconds ago      Running             kube-apiserver            0                   d297a5b6e9eaf       kube-apiserver-default-k8s-diff-port-098307            kube-system
	66aaedba458a2       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      34 seconds ago      Running             etcd                      0                   5e5e9469301bb       etcd-default-k8s-diff-port-098307                      kube-system
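
The CRI-O entries further up are the runtime's side of the standard CRI calls visible in the RPC names (RunPodSandbox, PullImage, CreateContainer, StartContainer on runtime.v1), and the table above is essentially the runtime's container listing. A minimal sketch of querying that listing directly over the CRI gRPC API, assuming CRI-O's default socket path and the k8s.io/cri-api Go module (neither detail is taken from this report):

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimev1 "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// /var/run/crio/crio.sock is CRI-O's conventional socket; adjust if the runtime is configured differently.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	client := runtimev1.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Same RPC family that appears in the CRI-O log above (runtime.v1.RuntimeService).
	resp, err := client.ListContainers(ctx, &runtimev1.ListContainersRequest{})
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range resp.GetContainers() {
		fmt.Printf("%-13.13s %-25s %s\n", c.GetId(), c.GetMetadata().GetName(), c.GetState())
	}
}
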
	
	
	==> coredns [9d2eb17d40318f0235a9e6d05ffd52ac8735080bc30283b96c5a8aa33a15c3bd] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:38417 - 50942 "HINFO IN 140160914340368146.1299297586544405439. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.491056313s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-098307
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-098307
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b5d1c9f4e75f4e638a533695fd62619949cefcab
	                    minikube.k8s.io/name=default-k8s-diff-port-098307
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T13_59_39_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 13:59:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-098307
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 13:59:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 13:59:55 +0000   Mon, 24 Nov 2025 13:59:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 13:59:55 +0000   Mon, 24 Nov 2025 13:59:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 13:59:55 +0000   Mon, 24 Nov 2025 13:59:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 13:59:55 +0000   Mon, 24 Nov 2025 13:59:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    default-k8s-diff-port-098307
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                346f1d74-50ec-4327-a799-559dc98af4c4
	  Boot ID:                    9a34d64a-eb17-4892-9c0b-855837aec864
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 coredns-66bc5c9577-kzf7b                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     23s
	  kube-system                 etcd-default-k8s-diff-port-098307                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         30s
	  kube-system                 kindnet-qswz4                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      23s
	  kube-system                 kube-apiserver-default-k8s-diff-port-098307             250m (3%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-098307    200m (2%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-proxy-8ck8x                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
	  kube-system                 kube-scheduler-default-k8s-diff-port-098307             100m (1%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 22s   kube-proxy       
	  Normal  Starting                 29s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  29s   kubelet          Node default-k8s-diff-port-098307 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29s   kubelet          Node default-k8s-diff-port-098307 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29s   kubelet          Node default-k8s-diff-port-098307 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           25s   node-controller  Node default-k8s-diff-port-098307 event: Registered Node default-k8s-diff-port-098307 in Controller
	  Normal  NodeReady                12s   kubelet          Node default-k8s-diff-port-098307 status is now: NodeReady
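
The node description above is rendered from the Node object's status (conditions, capacity, allocatable) as reported by the kubelet. A minimal client-go sketch that reads the same fields for this profile; the kubeconfig path is an assumption, since the CI run keeps its own kubeconfig under the minikube-integration workspace:

package main

import (
	"context"
	"fmt"
	"log"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	// Assumes the default kubeconfig location; the CI run uses KUBECONFIG from its workspace instead.
	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	node, err := clientset.CoreV1().Nodes().Get(context.Background(),
		"default-k8s-diff-port-098307", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	// Same data as the Conditions table in the describe output above.
	for _, cond := range node.Status.Conditions {
		fmt.Printf("%-16s %-6s %s\n", cond.Type, cond.Status, cond.Reason)
	}
	fmt.Println("allocatable cpu:", node.Status.Allocatable.Cpu().String())
	fmt.Println("allocatable memory:", node.Status.Allocatable.Memory().String())
}
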
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a c8 62 0b 56 43 08 06
	[  +0.000389] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 7e 1c 4f d3 0f 6c 08 06
	[Nov24 13:16] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +1.054353] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +1.023912] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +1.023872] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +1.023897] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +1.023891] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +2.047768] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +4.031637] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +8.191144] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[ +16.382308] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[Nov24 13:17] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	
	
	==> etcd [66aaedba458a2b456437e534b717ef975614d46b6202309c4c47493cb7662cc8] <==
	{"level":"warn","ts":"2025-11-24T13:59:34.318451Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41100","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:59:34.325726Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41108","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:59:34.333234Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41128","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:59:34.341945Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41164","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:59:34.348283Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41180","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:59:34.354849Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41202","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:59:34.361907Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41218","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:59:34.391975Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41240","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:59:34.399649Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41262","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:59:34.406817Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41276","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:59:34.459263Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41286","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-24T13:59:36.080086Z","caller":"traceutil/trace.go:172","msg":"trace[1620745047] linearizableReadLoop","detail":"{readStateIndex:73; appliedIndex:73; }","duration":"127.091821ms","start":"2025-11-24T13:59:35.952968Z","end":"2025-11-24T13:59:36.080060Z","steps":["trace[1620745047] 'read index received'  (duration: 127.085438ms)","trace[1620745047] 'applied index is now lower than readState.Index'  (duration: 5.353µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-24T13:59:36.082723Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"129.724081ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterroles/system:aggregate-to-edit\" limit:1 ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2025-11-24T13:59:36.082796Z","caller":"traceutil/trace.go:172","msg":"trace[823784899] range","detail":"{range_begin:/registry/clusterroles/system:aggregate-to-edit; range_end:; response_count:0; response_revision:69; }","duration":"129.817877ms","start":"2025-11-24T13:59:35.952966Z","end":"2025-11-24T13:59:36.082784Z","steps":["trace[823784899] 'agreement among raft nodes before linearized reading'  (duration: 127.178184ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T13:59:36.082823Z","caller":"traceutil/trace.go:172","msg":"trace[408800419] transaction","detail":"{read_only:false; response_revision:70; number_of_response:1; }","duration":"131.158276ms","start":"2025-11-24T13:59:35.951648Z","end":"2025-11-24T13:59:36.082806Z","steps":["trace[408800419] 'process raft request'  (duration: 128.440538ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T13:59:36.291906Z","caller":"traceutil/trace.go:172","msg":"trace[942278461] transaction","detail":"{read_only:false; response_revision:71; number_of_response:1; }","duration":"200.497953ms","start":"2025-11-24T13:59:36.091371Z","end":"2025-11-24T13:59:36.291869Z","steps":["trace[942278461] 'process raft request'  (duration: 200.399697ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-24T13:59:36.476501Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"125.77728ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13873790234264700904 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/clusterroles/system:discovery\" mod_revision:0 > success:<request_put:<key:\"/registry/clusterroles/system:discovery\" value_size:587 >> failure:<>>","response":"size:14"}
	{"level":"info","ts":"2025-11-24T13:59:36.476603Z","caller":"traceutil/trace.go:172","msg":"trace[777918852] transaction","detail":"{read_only:false; response_revision:72; number_of_response:1; }","duration":"181.004298ms","start":"2025-11-24T13:59:36.295583Z","end":"2025-11-24T13:59:36.476587Z","steps":["trace[777918852] 'process raft request'  (duration: 54.773984ms)","trace[777918852] 'compare'  (duration: 125.691798ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-24T13:59:36.679493Z","caller":"traceutil/trace.go:172","msg":"trace[262330646] transaction","detail":"{read_only:false; response_revision:73; number_of_response:1; }","duration":"199.090359ms","start":"2025-11-24T13:59:36.480382Z","end":"2025-11-24T13:59:36.679473Z","steps":["trace[262330646] 'process raft request'  (duration: 122.575115ms)","trace[262330646] 'compare'  (duration: 76.391607ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-24T13:59:36.885372Z","caller":"traceutil/trace.go:172","msg":"trace[1780179759] transaction","detail":"{read_only:false; response_revision:76; number_of_response:1; }","duration":"189.984398ms","start":"2025-11-24T13:59:36.695376Z","end":"2025-11-24T13:59:36.885360Z","steps":["trace[1780179759] 'process raft request'  (duration: 189.901688ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T13:59:37.032707Z","caller":"traceutil/trace.go:172","msg":"trace[1963070824] transaction","detail":"{read_only:false; response_revision:77; number_of_response:1; }","duration":"143.946612ms","start":"2025-11-24T13:59:36.888747Z","end":"2025-11-24T13:59:37.032693Z","steps":["trace[1963070824] 'process raft request'  (duration: 85.953169ms)","trace[1963070824] 'compare'  (duration: 57.912876ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-24T13:59:37.358087Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"178.080675ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:4"}
	{"level":"info","ts":"2025-11-24T13:59:37.358139Z","caller":"traceutil/trace.go:172","msg":"trace[1555506763] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:80; }","duration":"178.140896ms","start":"2025-11-24T13:59:37.179984Z","end":"2025-11-24T13:59:37.358125Z","steps":["trace[1555506763] 'agreement among raft nodes before linearized reading'  (duration: 55.336223ms)","trace[1555506763] 'range keys from in-memory index tree'  (duration: 122.721393ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-24T13:59:37.358161Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"122.768448ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13873790234264700922 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/clusterroles/system:aggregate-to-view\" mod_revision:0 > success:<request_put:<key:\"/registry/clusterroles/system:aggregate-to-view\" value_size:1962 >> failure:<>>","response":"size:14"}
	{"level":"info","ts":"2025-11-24T13:59:37.358216Z","caller":"traceutil/trace.go:172","msg":"trace[2083563092] transaction","detail":"{read_only:false; response_revision:81; number_of_response:1; }","duration":"246.869448ms","start":"2025-11-24T13:59:37.111336Z","end":"2025-11-24T13:59:37.358205Z","steps":["trace[2083563092] 'process raft request'  (duration: 124.01681ms)","trace[2083563092] 'compare'  (duration: 122.683349ms)"],"step_count":2}
	
	
	==> kernel <==
	 14:00:07 up  2:42,  0 user,  load average: 2.82, 2.93, 2.04
	Linux default-k8s-diff-port-098307 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [186333f3c05be9a884497b0d83baeec209474103309eadba07a058079989dff1] <==
	I1124 13:59:44.623311       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 13:59:44.623510       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1124 13:59:44.623621       1 main.go:148] setting mtu 1500 for CNI 
	I1124 13:59:44.623636       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 13:59:44.623644       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T13:59:44Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 13:59:44.921558       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 13:59:44.921584       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 13:59:44.921595       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 13:59:44.921766       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1124 13:59:45.221658       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 13:59:45.221685       1 metrics.go:72] Registering metrics
	I1124 13:59:45.221746       1 controller.go:711] "Syncing nftables rules"
	I1124 13:59:54.924010       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1124 13:59:54.924151       1 main.go:301] handling current node
	I1124 14:00:04.921585       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1124 14:00:04.921626       1 main.go:301] handling current node
	
	
	==> kube-apiserver [0513c56b4b2e6b8750d2e6779c297f919de45ab4186a91be72edb970f8b92b5a] <==
	I1124 13:59:35.004246       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1124 13:59:35.008976       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1124 13:59:35.034142       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	E1124 13:59:35.070306       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1124 13:59:35.080059       1 controller.go:667] quota admission added evaluator for: namespaces
	I1124 13:59:35.349698       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1124 13:59:35.949095       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1124 13:59:36.083582       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1124 13:59:36.083602       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1124 13:59:37.756753       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 13:59:37.792628       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 13:59:37.886326       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1124 13:59:37.892105       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1124 13:59:37.893023       1 controller.go:667] quota admission added evaluator for: endpoints
	I1124 13:59:37.897336       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1124 13:59:37.938796       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1124 13:59:38.891540       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1124 13:59:38.899524       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1124 13:59:38.906458       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1124 13:59:43.641450       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1124 13:59:43.842375       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 13:59:43.845845       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 13:59:44.042308       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1124 13:59:44.042308       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1124 14:00:06.211467       1 conn.go:339] Error on socket receive: read tcp 192.168.103.2:8444->192.168.103.1:38604: use of closed network connection
	
	
	==> kube-controller-manager [7545509aa315212688a807b070b993a1416a54f1b6ac86fb44e2e11eed19a386] <==
	I1124 13:59:42.919626       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1124 13:59:42.937677       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1124 13:59:42.937721       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1124 13:59:42.937749       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1124 13:59:42.937836       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1124 13:59:42.937841       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1124 13:59:42.937936       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1124 13:59:42.938079       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1124 13:59:42.939012       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1124 13:59:42.939047       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1124 13:59:42.939049       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1124 13:59:42.939191       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1124 13:59:42.939235       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1124 13:59:42.940391       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1124 13:59:42.942670       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1124 13:59:42.942693       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1124 13:59:42.942740       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1124 13:59:42.942769       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1124 13:59:42.942777       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1124 13:59:42.942784       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1124 13:59:42.943830       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 13:59:42.948829       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="default-k8s-diff-port-098307" podCIDRs=["10.244.0.0/24"]
	I1124 13:59:42.961997       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 13:59:42.966122       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 13:59:57.890084       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [a228b1070b5229b612bdce8c1f88b9b1511f80ba27b5757323f547469871586a] <==
	I1124 13:59:44.467251       1 server_linux.go:53] "Using iptables proxy"
	I1124 13:59:44.529166       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1124 13:59:44.629882       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1124 13:59:44.629938       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1124 13:59:44.630032       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 13:59:44.650970       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 13:59:44.651027       1 server_linux.go:132] "Using iptables Proxier"
	I1124 13:59:44.656479       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 13:59:44.656966       1 server.go:527] "Version info" version="v1.34.1"
	I1124 13:59:44.656993       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 13:59:44.658666       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 13:59:44.658683       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 13:59:44.658704       1 config.go:200] "Starting service config controller"
	I1124 13:59:44.658709       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 13:59:44.658728       1 config.go:106] "Starting endpoint slice config controller"
	I1124 13:59:44.658733       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 13:59:44.658790       1 config.go:309] "Starting node config controller"
	I1124 13:59:44.658934       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 13:59:44.659016       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1124 13:59:44.759880       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1124 13:59:44.759945       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1124 13:59:44.759974       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [5c519c4537373fd1c9f49ba821848258ab3b626094e729104684dfc230b399ba] <==
	E1124 13:59:35.137709       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1124 13:59:35.138002       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1124 13:59:35.138112       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1124 13:59:35.138250       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1124 13:59:35.137885       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1124 13:59:35.947550       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1124 13:59:35.962548       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1124 13:59:35.998549       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1124 13:59:36.001516       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1124 13:59:36.175752       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1124 13:59:36.195881       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1124 13:59:36.252379       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1124 13:59:36.273425       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1124 13:59:36.296417       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1124 13:59:36.381880       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1124 13:59:36.382093       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1124 13:59:36.423950       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1124 13:59:36.430340       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1124 13:59:36.430657       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1124 13:59:36.477134       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1124 13:59:36.549262       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1124 13:59:36.662997       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1124 13:59:36.667852       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1124 13:59:36.689787       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	I1124 13:59:39.225945       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 24 13:59:39 default-k8s-diff-port-098307 kubelet[1310]: I1124 13:59:39.771233    1310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-default-k8s-diff-port-098307" podStartSLOduration=1.771216145 podStartE2EDuration="1.771216145s" podCreationTimestamp="2025-11-24 13:59:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 13:59:39.761108334 +0000 UTC m=+1.118923984" watchObservedRunningTime="2025-11-24 13:59:39.771216145 +0000 UTC m=+1.129031795"
	Nov 24 13:59:39 default-k8s-diff-port-098307 kubelet[1310]: I1124 13:59:39.771387    1310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-default-k8s-diff-port-098307" podStartSLOduration=1.771377746 podStartE2EDuration="1.771377746s" podCreationTimestamp="2025-11-24 13:59:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 13:59:39.77110798 +0000 UTC m=+1.128923638" watchObservedRunningTime="2025-11-24 13:59:39.771377746 +0000 UTC m=+1.129193396"
	Nov 24 13:59:39 default-k8s-diff-port-098307 kubelet[1310]: I1124 13:59:39.782293    1310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-default-k8s-diff-port-098307" podStartSLOduration=1.782275818 podStartE2EDuration="1.782275818s" podCreationTimestamp="2025-11-24 13:59:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 13:59:39.781988042 +0000 UTC m=+1.139803693" watchObservedRunningTime="2025-11-24 13:59:39.782275818 +0000 UTC m=+1.140091469"
	Nov 24 13:59:39 default-k8s-diff-port-098307 kubelet[1310]: I1124 13:59:39.803294    1310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-default-k8s-diff-port-098307" podStartSLOduration=2.8032753809999997 podStartE2EDuration="2.803275381s" podCreationTimestamp="2025-11-24 13:59:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 13:59:39.794268801 +0000 UTC m=+1.152084443" watchObservedRunningTime="2025-11-24 13:59:39.803275381 +0000 UTC m=+1.161091035"
	Nov 24 13:59:43 default-k8s-diff-port-098307 kubelet[1310]: I1124 13:59:43.031754    1310 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 24 13:59:43 default-k8s-diff-port-098307 kubelet[1310]: I1124 13:59:43.032407    1310 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 24 13:59:44 default-k8s-diff-port-098307 kubelet[1310]: I1124 13:59:44.150049    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vj2q8\" (UniqueName: \"kubernetes.io/projected/fbe0bd8e-039a-4627-a3f3-0473fbede882-kube-api-access-vj2q8\") pod \"kindnet-qswz4\" (UID: \"fbe0bd8e-039a-4627-a3f3-0473fbede882\") " pod="kube-system/kindnet-qswz4"
	Nov 24 13:59:44 default-k8s-diff-port-098307 kubelet[1310]: I1124 13:59:44.150108    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/db00c3e7-d1f0-464f-96a6-c07b36b62e44-kube-proxy\") pod \"kube-proxy-8ck8x\" (UID: \"db00c3e7-d1f0-464f-96a6-c07b36b62e44\") " pod="kube-system/kube-proxy-8ck8x"
	Nov 24 13:59:44 default-k8s-diff-port-098307 kubelet[1310]: I1124 13:59:44.150136    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/fbe0bd8e-039a-4627-a3f3-0473fbede882-cni-cfg\") pod \"kindnet-qswz4\" (UID: \"fbe0bd8e-039a-4627-a3f3-0473fbede882\") " pod="kube-system/kindnet-qswz4"
	Nov 24 13:59:44 default-k8s-diff-port-098307 kubelet[1310]: I1124 13:59:44.150157    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fbe0bd8e-039a-4627-a3f3-0473fbede882-xtables-lock\") pod \"kindnet-qswz4\" (UID: \"fbe0bd8e-039a-4627-a3f3-0473fbede882\") " pod="kube-system/kindnet-qswz4"
	Nov 24 13:59:44 default-k8s-diff-port-098307 kubelet[1310]: I1124 13:59:44.150179    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/db00c3e7-d1f0-464f-96a6-c07b36b62e44-xtables-lock\") pod \"kube-proxy-8ck8x\" (UID: \"db00c3e7-d1f0-464f-96a6-c07b36b62e44\") " pod="kube-system/kube-proxy-8ck8x"
	Nov 24 13:59:44 default-k8s-diff-port-098307 kubelet[1310]: I1124 13:59:44.150198    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/db00c3e7-d1f0-464f-96a6-c07b36b62e44-lib-modules\") pod \"kube-proxy-8ck8x\" (UID: \"db00c3e7-d1f0-464f-96a6-c07b36b62e44\") " pod="kube-system/kube-proxy-8ck8x"
	Nov 24 13:59:44 default-k8s-diff-port-098307 kubelet[1310]: I1124 13:59:44.150219    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fbe0bd8e-039a-4627-a3f3-0473fbede882-lib-modules\") pod \"kindnet-qswz4\" (UID: \"fbe0bd8e-039a-4627-a3f3-0473fbede882\") " pod="kube-system/kindnet-qswz4"
	Nov 24 13:59:44 default-k8s-diff-port-098307 kubelet[1310]: I1124 13:59:44.150246    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sn7dg\" (UniqueName: \"kubernetes.io/projected/db00c3e7-d1f0-464f-96a6-c07b36b62e44-kube-api-access-sn7dg\") pod \"kube-proxy-8ck8x\" (UID: \"db00c3e7-d1f0-464f-96a6-c07b36b62e44\") " pod="kube-system/kube-proxy-8ck8x"
	Nov 24 13:59:44 default-k8s-diff-port-098307 kubelet[1310]: I1124 13:59:44.766703    1310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-8ck8x" podStartSLOduration=0.766683827 podStartE2EDuration="766.683827ms" podCreationTimestamp="2025-11-24 13:59:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 13:59:44.76614745 +0000 UTC m=+6.123963103" watchObservedRunningTime="2025-11-24 13:59:44.766683827 +0000 UTC m=+6.124499478"
	Nov 24 13:59:48 default-k8s-diff-port-098307 kubelet[1310]: I1124 13:59:48.532510    1310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-qswz4" podStartSLOduration=4.532450166 podStartE2EDuration="4.532450166s" podCreationTimestamp="2025-11-24 13:59:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 13:59:44.787629541 +0000 UTC m=+6.145445192" watchObservedRunningTime="2025-11-24 13:59:48.532450166 +0000 UTC m=+9.890265818"
	Nov 24 13:59:55 default-k8s-diff-port-098307 kubelet[1310]: I1124 13:59:55.074310    1310 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 24 13:59:55 default-k8s-diff-port-098307 kubelet[1310]: I1124 13:59:55.124148    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a643779f-2e4f-4d3d-8b0f-aae3ee559a69-config-volume\") pod \"coredns-66bc5c9577-kzf7b\" (UID: \"a643779f-2e4f-4d3d-8b0f-aae3ee559a69\") " pod="kube-system/coredns-66bc5c9577-kzf7b"
	Nov 24 13:59:55 default-k8s-diff-port-098307 kubelet[1310]: I1124 13:59:55.124206    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/f1b9a947-fc26-4356-a262-f3f74752b010-tmp\") pod \"storage-provisioner\" (UID: \"f1b9a947-fc26-4356-a262-f3f74752b010\") " pod="kube-system/storage-provisioner"
	Nov 24 13:59:55 default-k8s-diff-port-098307 kubelet[1310]: I1124 13:59:55.124231    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6259f\" (UniqueName: \"kubernetes.io/projected/a643779f-2e4f-4d3d-8b0f-aae3ee559a69-kube-api-access-6259f\") pod \"coredns-66bc5c9577-kzf7b\" (UID: \"a643779f-2e4f-4d3d-8b0f-aae3ee559a69\") " pod="kube-system/coredns-66bc5c9577-kzf7b"
	Nov 24 13:59:55 default-k8s-diff-port-098307 kubelet[1310]: I1124 13:59:55.124260    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9jxp\" (UniqueName: \"kubernetes.io/projected/f1b9a947-fc26-4356-a262-f3f74752b010-kube-api-access-z9jxp\") pod \"storage-provisioner\" (UID: \"f1b9a947-fc26-4356-a262-f3f74752b010\") " pod="kube-system/storage-provisioner"
	Nov 24 13:59:55 default-k8s-diff-port-098307 kubelet[1310]: I1124 13:59:55.789501    1310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=11.789477503 podStartE2EDuration="11.789477503s" podCreationTimestamp="2025-11-24 13:59:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 13:59:55.789229816 +0000 UTC m=+17.147045470" watchObservedRunningTime="2025-11-24 13:59:55.789477503 +0000 UTC m=+17.147293155"
	Nov 24 13:59:58 default-k8s-diff-port-098307 kubelet[1310]: I1124 13:59:58.102369    1310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-kzf7b" podStartSLOduration=14.102343994 podStartE2EDuration="14.102343994s" podCreationTimestamp="2025-11-24 13:59:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 13:59:55.798317507 +0000 UTC m=+17.156133158" watchObservedRunningTime="2025-11-24 13:59:58.102343994 +0000 UTC m=+19.460159644"
	Nov 24 13:59:58 default-k8s-diff-port-098307 kubelet[1310]: I1124 13:59:58.145342    1310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lm5m\" (UniqueName: \"kubernetes.io/projected/0cbb62f6-2583-44e5-8c7f-99a32975fb68-kube-api-access-9lm5m\") pod \"busybox\" (UID: \"0cbb62f6-2583-44e5-8c7f-99a32975fb68\") " pod="default/busybox"
	Nov 24 13:59:59 default-k8s-diff-port-098307 kubelet[1310]: I1124 13:59:59.808803    1310 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.03703203 podStartE2EDuration="1.808443474s" podCreationTimestamp="2025-11-24 13:59:58 +0000 UTC" firstStartedPulling="2025-11-24 13:59:58.429854352 +0000 UTC m=+19.787669985" lastFinishedPulling="2025-11-24 13:59:59.201265794 +0000 UTC m=+20.559081429" observedRunningTime="2025-11-24 13:59:59.805869797 +0000 UTC m=+21.163685448" watchObservedRunningTime="2025-11-24 13:59:59.808443474 +0000 UTC m=+21.166259125"
	
	
	==> storage-provisioner [ab880f6ff95336323b1dee22773244816d12f757cb74fa0efdb6785ddf27177a] <==
	I1124 13:59:55.466058       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1124 13:59:55.474084       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1124 13:59:55.474159       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1124 13:59:55.476159       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:59:55.481121       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 13:59:55.481249       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1124 13:59:55.481491       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-098307_ea8cf367-3c71-4ce7-91ff-b2019d5e65ea!
	I1124 13:59:55.481931       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"81dbd621-c84a-49e1-bca9-3457968fc43a", APIVersion:"v1", ResourceVersion:"408", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-098307_ea8cf367-3c71-4ce7-91ff-b2019d5e65ea became leader
	W1124 13:59:55.483878       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:59:55.486711       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 13:59:55.582159       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-098307_ea8cf367-3c71-4ce7-91ff-b2019d5e65ea!
	W1124 13:59:57.489589       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:59:57.493374       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:59:59.496467       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 13:59:59.500109       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:00:01.503672       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:00:01.508004       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:00:03.510371       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:00:03.514176       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:00:05.517773       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:00:05.522303       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:00:07.525877       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:00:07.531146       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-098307 -n default-k8s-diff-port-098307
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-098307 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.12s)

x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.58s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-456660 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-456660 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (271.081212ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T14:00:13Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-456660 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-456660 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context embed-certs-456660 describe deploy/metrics-server -n kube-system: exit status 1 (65.009911ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-456660 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
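For anyone triaging this failure by hand, the paused-state probe that raised MK_ADDON_ENABLE_PAUSED above can be rerun directly. This is a minimal sketch (not part of the recorded test run) that assumes the embed-certs-456660 profile is still up and reuses the exact runc invocation shown in the stderr block:

	# Re-run minikube's paused-container probe inside the node; in this run it
	# failed with "open /run/runc: no such file or directory".
	out/minikube-linux-amd64 -p embed-certs-456660 ssh -- sudo runc list -f json

	# Check whether the runc state directory exists on the node at all.
	out/minikube-linux-amd64 -p embed-certs-456660 ssh -- ls -ld /run/runc
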
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-456660
helpers_test.go:243: (dbg) docker inspect embed-certs-456660:

-- stdout --
	[
	    {
	        "Id": "387e2d09bc80bb668bbaa3f0cedcf60d4cec831023dd4ebe53def163801cea73",
	        "Created": "2025-11-24T13:59:02.932884414Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 593534,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T13:59:02.979623483Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/387e2d09bc80bb668bbaa3f0cedcf60d4cec831023dd4ebe53def163801cea73/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/387e2d09bc80bb668bbaa3f0cedcf60d4cec831023dd4ebe53def163801cea73/hostname",
	        "HostsPath": "/var/lib/docker/containers/387e2d09bc80bb668bbaa3f0cedcf60d4cec831023dd4ebe53def163801cea73/hosts",
	        "LogPath": "/var/lib/docker/containers/387e2d09bc80bb668bbaa3f0cedcf60d4cec831023dd4ebe53def163801cea73/387e2d09bc80bb668bbaa3f0cedcf60d4cec831023dd4ebe53def163801cea73-json.log",
	        "Name": "/embed-certs-456660",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "embed-certs-456660:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-456660",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "387e2d09bc80bb668bbaa3f0cedcf60d4cec831023dd4ebe53def163801cea73",
	                "LowerDir": "/var/lib/docker/overlay2/8be6e07f832a00279236a6de030345420fe4432951998b924d1c7aacc8f058ed-init/diff:/var/lib/docker/overlay2/b17d6205cf290186b389ac7c1255d7274fea54ef27df9ff8755bddd2d25eb638/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8be6e07f832a00279236a6de030345420fe4432951998b924d1c7aacc8f058ed/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8be6e07f832a00279236a6de030345420fe4432951998b924d1c7aacc8f058ed/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8be6e07f832a00279236a6de030345420fe4432951998b924d1c7aacc8f058ed/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "embed-certs-456660",
	                "Source": "/var/lib/docker/volumes/embed-certs-456660/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-456660",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-456660",
	                "name.minikube.sigs.k8s.io": "embed-certs-456660",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "0b7ce2183b9278a829b501c2f6f566b22914a362a20bfb3d6689414486b8b224",
	            "SandboxKey": "/var/run/docker/netns/0b7ce2183b92",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33448"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33449"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33452"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33450"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33451"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-456660": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "95ddebcd3d89852aa68144f21da1b1af75512bc90f1d459df2c763b06d58452c",
	                    "EndpointID": "4b64d39c5d1c4998fb641593d0b59a3b45ebb273673c3241c3f17484f89e0ef4",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "be:28:b7:e0:1b:ef",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-456660",
	                        "387e2d09bc80"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-456660 -n embed-certs-456660
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-456660 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-456660 logs -n 25: (1.300738851s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ start   │ -p cert-expiration-107341 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio                                                                                                                                     │ cert-expiration-107341       │ jenkins │ v1.37.0 │ 24 Nov 25 13:58 UTC │ 24 Nov 25 13:58 UTC │
	│ delete  │ -p cert-expiration-107341                                                                                                                                                                                                                     │ cert-expiration-107341       │ jenkins │ v1.37.0 │ 24 Nov 25 13:58 UTC │ 24 Nov 25 13:58 UTC │
	│ start   │ -p embed-certs-456660 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-456660           │ jenkins │ v1.37.0 │ 24 Nov 25 13:58 UTC │ 24 Nov 25 14:00 UTC │
	│ image   │ no-preload-495729 image list --format=json                                                                                                                                                                                                    │ no-preload-495729            │ jenkins │ v1.37.0 │ 24 Nov 25 13:59 UTC │ 24 Nov 25 13:59 UTC │
	│ pause   │ -p no-preload-495729 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-495729            │ jenkins │ v1.37.0 │ 24 Nov 25 13:59 UTC │                     │
	│ delete  │ -p no-preload-495729                                                                                                                                                                                                                          │ no-preload-495729            │ jenkins │ v1.37.0 │ 24 Nov 25 13:59 UTC │ 24 Nov 25 13:59 UTC │
	│ delete  │ -p no-preload-495729                                                                                                                                                                                                                          │ no-preload-495729            │ jenkins │ v1.37.0 │ 24 Nov 25 13:59 UTC │ 24 Nov 25 13:59 UTC │
	│ delete  │ -p disable-driver-mounts-036543                                                                                                                                                                                                               │ disable-driver-mounts-036543 │ jenkins │ v1.37.0 │ 24 Nov 25 13:59 UTC │ 24 Nov 25 13:59 UTC │
	│ start   │ -p default-k8s-diff-port-098307 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-098307 │ jenkins │ v1.37.0 │ 24 Nov 25 13:59 UTC │ 24 Nov 25 13:59 UTC │
	│ image   │ old-k8s-version-551674 image list --format=json                                                                                                                                                                                               │ old-k8s-version-551674       │ jenkins │ v1.37.0 │ 24 Nov 25 13:59 UTC │ 24 Nov 25 13:59 UTC │
	│ pause   │ -p old-k8s-version-551674 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-551674       │ jenkins │ v1.37.0 │ 24 Nov 25 13:59 UTC │                     │
	│ delete  │ -p old-k8s-version-551674                                                                                                                                                                                                                     │ old-k8s-version-551674       │ jenkins │ v1.37.0 │ 24 Nov 25 13:59 UTC │ 24 Nov 25 13:59 UTC │
	│ delete  │ -p old-k8s-version-551674                                                                                                                                                                                                                     │ old-k8s-version-551674       │ jenkins │ v1.37.0 │ 24 Nov 25 13:59 UTC │ 24 Nov 25 13:59 UTC │
	│ start   │ -p newest-cni-305966 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-305966            │ jenkins │ v1.37.0 │ 24 Nov 25 13:59 UTC │ 24 Nov 25 14:00 UTC │
	│ start   │ -p kubernetes-upgrade-061040 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-061040    │ jenkins │ v1.37.0 │ 24 Nov 25 13:59 UTC │                     │
	│ start   │ -p kubernetes-upgrade-061040 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-061040    │ jenkins │ v1.37.0 │ 24 Nov 25 13:59 UTC │ 24 Nov 25 14:00 UTC │
	│ addons  │ enable metrics-server -p newest-cni-305966 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-305966            │ jenkins │ v1.37.0 │ 24 Nov 25 14:00 UTC │                     │
	│ stop    │ -p newest-cni-305966 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-305966            │ jenkins │ v1.37.0 │ 24 Nov 25 14:00 UTC │ 24 Nov 25 14:00 UTC │
	│ delete  │ -p kubernetes-upgrade-061040                                                                                                                                                                                                                  │ kubernetes-upgrade-061040    │ jenkins │ v1.37.0 │ 24 Nov 25 14:00 UTC │ 24 Nov 25 14:00 UTC │
	│ addons  │ enable dashboard -p newest-cni-305966 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-305966            │ jenkins │ v1.37.0 │ 24 Nov 25 14:00 UTC │ 24 Nov 25 14:00 UTC │
	│ start   │ -p newest-cni-305966 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-305966            │ jenkins │ v1.37.0 │ 24 Nov 25 14:00 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-098307 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-098307 │ jenkins │ v1.37.0 │ 24 Nov 25 14:00 UTC │                     │
	│ start   │ -p auto-165759 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-165759                  │ jenkins │ v1.37.0 │ 24 Nov 25 14:00 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-098307 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-098307 │ jenkins │ v1.37.0 │ 24 Nov 25 14:00 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-456660 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-456660           │ jenkins │ v1.37.0 │ 24 Nov 25 14:00 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 14:00:06
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 14:00:06.639955  612215 out.go:360] Setting OutFile to fd 1 ...
	I1124 14:00:06.640075  612215 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 14:00:06.640086  612215 out.go:374] Setting ErrFile to fd 2...
	I1124 14:00:06.640093  612215 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 14:00:06.640294  612215 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-348000/.minikube/bin
	I1124 14:00:06.640720  612215 out.go:368] Setting JSON to false
	I1124 14:00:06.641938  612215 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":9754,"bootTime":1763983053,"procs":319,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 14:00:06.641994  612215 start.go:143] virtualization: kvm guest
	I1124 14:00:06.643737  612215 out.go:179] * [auto-165759] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1124 14:00:06.644898  612215 out.go:179]   - MINIKUBE_LOCATION=21932
	I1124 14:00:06.644920  612215 notify.go:221] Checking for updates...
	I1124 14:00:06.647963  612215 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 14:00:06.648994  612215 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21932-348000/kubeconfig
	I1124 14:00:06.650029  612215 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21932-348000/.minikube
	I1124 14:00:06.651162  612215 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1124 14:00:06.652296  612215 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 14:00:06.653986  612215 config.go:182] Loaded profile config "default-k8s-diff-port-098307": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 14:00:06.654117  612215 config.go:182] Loaded profile config "embed-certs-456660": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 14:00:06.654273  612215 config.go:182] Loaded profile config "newest-cni-305966": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 14:00:06.654410  612215 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 14:00:06.678126  612215 docker.go:124] docker version: linux-29.0.3:Docker Engine - Community
	I1124 14:00:06.678224  612215 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 14:00:06.738636  612215 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-24 14:00:06.729329331 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 14:00:06.738783  612215 docker.go:319] overlay module found
	I1124 14:00:06.740540  612215 out.go:179] * Using the docker driver based on user configuration
	I1124 14:00:06.741585  612215 start.go:309] selected driver: docker
	I1124 14:00:06.741600  612215 start.go:927] validating driver "docker" against <nil>
	I1124 14:00:06.741610  612215 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 14:00:06.742172  612215 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 14:00:06.795665  612215 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-24 14:00:06.785850803 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 14:00:06.795856  612215 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1124 14:00:06.796102  612215 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 14:00:06.797705  612215 out.go:179] * Using Docker driver with root privileges
	I1124 14:00:06.798793  612215 cni.go:84] Creating CNI manager for ""
	I1124 14:00:06.798864  612215 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 14:00:06.798878  612215 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1124 14:00:06.798993  612215 start.go:353] cluster config:
	{Name:auto-165759 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-165759 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 14:00:06.800311  612215 out.go:179] * Starting "auto-165759" primary control-plane node in "auto-165759" cluster
	I1124 14:00:06.801369  612215 cache.go:134] Beginning downloading kic base image for docker with crio
	I1124 14:00:06.802406  612215 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1124 14:00:06.803315  612215 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 14:00:06.803344  612215 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21932-348000/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1124 14:00:06.803357  612215 cache.go:65] Caching tarball of preloaded images
	I1124 14:00:06.803391  612215 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1124 14:00:06.803462  612215 preload.go:238] Found /home/jenkins/minikube-integration/21932-348000/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1124 14:00:06.803474  612215 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1124 14:00:06.803574  612215 profile.go:143] Saving config to /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/auto-165759/config.json ...
	I1124 14:00:06.803604  612215 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/auto-165759/config.json: {Name:mkafcf12b893460417f613b5956b061b507857b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:00:06.824406  612215 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1124 14:00:06.824437  612215 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1124 14:00:06.824457  612215 cache.go:240] Successfully downloaded all kic artifacts
	I1124 14:00:06.824499  612215 start.go:360] acquireMachinesLock for auto-165759: {Name:mke2972eaae0a3077df79966ba25decc1725d099 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 14:00:06.824601  612215 start.go:364] duration metric: took 79.565µs to acquireMachinesLock for "auto-165759"
	I1124 14:00:06.824623  612215 start.go:93] Provisioning new machine with config: &{Name:auto-165759 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-165759 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 14:00:06.824701  612215 start.go:125] createHost starting for "" (driver="docker")
	I1124 14:00:05.594487  611344 out.go:252] * Restarting existing docker container for "newest-cni-305966" ...
	I1124 14:00:05.594567  611344 cli_runner.go:164] Run: docker start newest-cni-305966
	I1124 14:00:06.042778  611344 cli_runner.go:164] Run: docker container inspect newest-cni-305966 --format={{.State.Status}}
	I1124 14:00:06.065571  611344 kic.go:430] container "newest-cni-305966" state is running.
	I1124 14:00:06.066040  611344 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-305966
	I1124 14:00:06.086933  611344 profile.go:143] Saving config to /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/newest-cni-305966/config.json ...
	I1124 14:00:06.087112  611344 machine.go:94] provisionDockerMachine start ...
	I1124 14:00:06.087177  611344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-305966
	I1124 14:00:06.106639  611344 main.go:143] libmachine: Using SSH client type: native
	I1124 14:00:06.106975  611344 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33463 <nil> <nil>}
	I1124 14:00:06.106997  611344 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 14:00:06.107857  611344 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50272->127.0.0.1:33463: read: connection reset by peer
	I1124 14:00:09.250576  611344 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-305966
	
	I1124 14:00:09.250635  611344 ubuntu.go:182] provisioning hostname "newest-cni-305966"
	I1124 14:00:09.250734  611344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-305966
	I1124 14:00:09.269624  611344 main.go:143] libmachine: Using SSH client type: native
	I1124 14:00:09.269963  611344 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33463 <nil> <nil>}
	I1124 14:00:09.269989  611344 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-305966 && echo "newest-cni-305966" | sudo tee /etc/hostname
	I1124 14:00:09.423943  611344 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-305966
	
	I1124 14:00:09.424043  611344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-305966
	I1124 14:00:09.442270  611344 main.go:143] libmachine: Using SSH client type: native
	I1124 14:00:09.442573  611344 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33463 <nil> <nil>}
	I1124 14:00:09.442602  611344 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-305966' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-305966/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-305966' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 14:00:09.591505  611344 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 14:00:09.591540  611344 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21932-348000/.minikube CaCertPath:/home/jenkins/minikube-integration/21932-348000/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21932-348000/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21932-348000/.minikube}
	I1124 14:00:09.591580  611344 ubuntu.go:190] setting up certificates
	I1124 14:00:09.591606  611344 provision.go:84] configureAuth start
	I1124 14:00:09.591678  611344 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-305966
	I1124 14:00:09.608000  611344 provision.go:143] copyHostCerts
	I1124 14:00:09.608076  611344 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-348000/.minikube/ca.pem, removing ...
	I1124 14:00:09.608092  611344 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-348000/.minikube/ca.pem
	I1124 14:00:09.608157  611344 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-348000/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21932-348000/.minikube/ca.pem (1078 bytes)
	I1124 14:00:09.608283  611344 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-348000/.minikube/cert.pem, removing ...
	I1124 14:00:09.608294  611344 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-348000/.minikube/cert.pem
	I1124 14:00:09.608331  611344 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-348000/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21932-348000/.minikube/cert.pem (1123 bytes)
	I1124 14:00:09.608412  611344 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-348000/.minikube/key.pem, removing ...
	I1124 14:00:09.608421  611344 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-348000/.minikube/key.pem
	I1124 14:00:09.608458  611344 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-348000/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21932-348000/.minikube/key.pem (1675 bytes)
	I1124 14:00:09.608524  611344 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21932-348000/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21932-348000/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21932-348000/.minikube/certs/ca-key.pem org=jenkins.newest-cni-305966 san=[127.0.0.1 192.168.94.2 localhost minikube newest-cni-305966]
	I1124 14:00:09.795774  611344 provision.go:177] copyRemoteCerts
	I1124 14:00:09.795836  611344 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 14:00:09.795882  611344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-305966
	I1124 14:00:09.815167  611344 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/newest-cni-305966/id_rsa Username:docker}
	I1124 14:00:09.919696  611344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1124 14:00:09.937539  611344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1124 14:00:09.955598  611344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1124 14:00:09.972859  611344 provision.go:87] duration metric: took 381.236357ms to configureAuth
	I1124 14:00:09.972886  611344 ubuntu.go:206] setting minikube options for container-runtime
	I1124 14:00:09.973062  611344 config.go:182] Loaded profile config "newest-cni-305966": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 14:00:09.973189  611344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-305966
	I1124 14:00:09.990437  611344 main.go:143] libmachine: Using SSH client type: native
	I1124 14:00:09.990698  611344 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33463 <nil> <nil>}
	I1124 14:00:09.990720  611344 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1124 14:00:06.826428  612215 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1124 14:00:06.826727  612215 start.go:159] libmachine.API.Create for "auto-165759" (driver="docker")
	I1124 14:00:06.826765  612215 client.go:173] LocalClient.Create starting
	I1124 14:00:06.826856  612215 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21932-348000/.minikube/certs/ca.pem
	I1124 14:00:06.826912  612215 main.go:143] libmachine: Decoding PEM data...
	I1124 14:00:06.826942  612215 main.go:143] libmachine: Parsing certificate...
	I1124 14:00:06.827035  612215 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21932-348000/.minikube/certs/cert.pem
	I1124 14:00:06.827066  612215 main.go:143] libmachine: Decoding PEM data...
	I1124 14:00:06.827081  612215 main.go:143] libmachine: Parsing certificate...
	I1124 14:00:06.827525  612215 cli_runner.go:164] Run: docker network inspect auto-165759 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1124 14:00:06.846848  612215 cli_runner.go:211] docker network inspect auto-165759 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1124 14:00:06.846935  612215 network_create.go:284] running [docker network inspect auto-165759] to gather additional debugging logs...
	I1124 14:00:06.846959  612215 cli_runner.go:164] Run: docker network inspect auto-165759
	W1124 14:00:06.864846  612215 cli_runner.go:211] docker network inspect auto-165759 returned with exit code 1
	I1124 14:00:06.864876  612215 network_create.go:287] error running [docker network inspect auto-165759]: docker network inspect auto-165759: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network auto-165759 not found
	I1124 14:00:06.864905  612215 network_create.go:289] output of [docker network inspect auto-165759]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network auto-165759 not found
	
	** /stderr **
	I1124 14:00:06.865080  612215 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 14:00:06.883080  612215 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-d51e7dfe1049 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:4a:86:1b:17:16:ff} reservation:<nil>}
	I1124 14:00:06.884122  612215 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-e3a6280986d1 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:92:e6:88:24:ba:69} reservation:<nil>}
	I1124 14:00:06.884636  612215 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-e4f79d672777 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:32:e2:7c:23:0e:27} reservation:<nil>}
	I1124 14:00:06.885473  612215 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ea31e0}
	I1124 14:00:06.885501  612215 network_create.go:124] attempt to create docker network auto-165759 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1124 14:00:06.885543  612215 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-165759 auto-165759
	I1124 14:00:06.935118  612215 network_create.go:108] docker network auto-165759 192.168.76.0/24 created
	I1124 14:00:06.935167  612215 kic.go:121] calculated static IP "192.168.76.2" for the "auto-165759" container
	I1124 14:00:06.935270  612215 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1124 14:00:06.954577  612215 cli_runner.go:164] Run: docker volume create auto-165759 --label name.minikube.sigs.k8s.io=auto-165759 --label created_by.minikube.sigs.k8s.io=true
	I1124 14:00:06.973699  612215 oci.go:103] Successfully created a docker volume auto-165759
	I1124 14:00:06.973773  612215 cli_runner.go:164] Run: docker run --rm --name auto-165759-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-165759 --entrypoint /usr/bin/test -v auto-165759:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1124 14:00:07.370078  612215 oci.go:107] Successfully prepared a docker volume auto-165759
	I1124 14:00:07.370137  612215 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 14:00:07.370149  612215 kic.go:194] Starting extracting preloaded images to volume ...
	I1124 14:00:07.370222  612215 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21932-348000/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v auto-165759:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
	I1124 14:00:11.272332  611344 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1124 14:00:11.272357  611344 machine.go:97] duration metric: took 5.185230585s to provisionDockerMachine
	I1124 14:00:11.272370  611344 start.go:293] postStartSetup for "newest-cni-305966" (driver="docker")
	I1124 14:00:11.272381  611344 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 14:00:11.272443  611344 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 14:00:11.272503  611344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-305966
	I1124 14:00:11.289956  611344 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/newest-cni-305966/id_rsa Username:docker}
	I1124 14:00:11.390985  611344 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 14:00:11.394433  611344 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 14:00:11.394458  611344 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 14:00:11.394469  611344 filesync.go:126] Scanning /home/jenkins/minikube-integration/21932-348000/.minikube/addons for local assets ...
	I1124 14:00:11.394523  611344 filesync.go:126] Scanning /home/jenkins/minikube-integration/21932-348000/.minikube/files for local assets ...
	I1124 14:00:11.394620  611344 filesync.go:149] local asset: /home/jenkins/minikube-integration/21932-348000/.minikube/files/etc/ssl/certs/3515932.pem -> 3515932.pem in /etc/ssl/certs
	I1124 14:00:11.394726  611344 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 14:00:11.402145  611344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/files/etc/ssl/certs/3515932.pem --> /etc/ssl/certs/3515932.pem (1708 bytes)
	I1124 14:00:11.419357  611344 start.go:296] duration metric: took 146.974942ms for postStartSetup
	I1124 14:00:11.419428  611344 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 14:00:11.419465  611344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-305966
	I1124 14:00:11.436677  611344 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/newest-cni-305966/id_rsa Username:docker}
	I1124 14:00:11.536483  611344 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 14:00:11.541073  611344 fix.go:56] duration metric: took 5.96583724s for fixHost
	I1124 14:00:11.541099  611344 start.go:83] releasing machines lock for "newest-cni-305966", held for 5.965887021s
	I1124 14:00:11.541180  611344 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-305966
	I1124 14:00:11.558308  611344 ssh_runner.go:195] Run: cat /version.json
	I1124 14:00:11.558360  611344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-305966
	I1124 14:00:11.558426  611344 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 14:00:11.558516  611344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-305966
	I1124 14:00:11.575458  611344 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/newest-cni-305966/id_rsa Username:docker}
	I1124 14:00:11.576178  611344 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/newest-cni-305966/id_rsa Username:docker}
	I1124 14:00:11.671465  611344 ssh_runner.go:195] Run: systemctl --version
	I1124 14:00:11.727243  611344 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1124 14:00:11.759992  611344 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 14:00:11.764307  611344 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 14:00:11.764355  611344 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 14:00:11.771850  611344 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1124 14:00:11.771871  611344 start.go:496] detecting cgroup driver to use...
	I1124 14:00:11.771914  611344 detect.go:190] detected "systemd" cgroup driver on host os
	I1124 14:00:11.771962  611344 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1124 14:00:11.785647  611344 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1124 14:00:11.798863  611344 docker.go:218] disabling cri-docker service (if available) ...
	I1124 14:00:11.798925  611344 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 14:00:11.815812  611344 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 14:00:11.830569  611344 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 14:00:11.926016  611344 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 14:00:12.019537  611344 docker.go:234] disabling docker service ...
	I1124 14:00:12.019605  611344 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 14:00:12.035763  611344 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 14:00:12.048780  611344 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 14:00:12.132824  611344 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 14:00:12.228446  611344 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 14:00:12.241308  611344 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 14:00:12.255879  611344 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1124 14:00:12.256027  611344 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:00:12.265416  611344 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1124 14:00:12.265479  611344 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:00:12.274340  611344 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:00:12.282705  611344 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:00:12.294801  611344 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 14:00:12.303924  611344 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:00:12.312281  611344 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:00:12.319919  611344 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:00:12.327933  611344 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 14:00:12.334789  611344 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 14:00:12.341614  611344 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 14:00:12.455849  611344 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1124 14:00:12.614518  611344 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1124 14:00:12.614593  611344 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1124 14:00:12.619358  611344 start.go:564] Will wait 60s for crictl version
	I1124 14:00:12.619421  611344 ssh_runner.go:195] Run: which crictl
	I1124 14:00:12.623479  611344 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 14:00:12.648222  611344 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1124 14:00:12.648299  611344 ssh_runner.go:195] Run: crio --version
	I1124 14:00:12.677057  611344 ssh_runner.go:195] Run: crio --version
	I1124 14:00:12.706583  611344 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1124 14:00:12.707690  611344 cli_runner.go:164] Run: docker network inspect newest-cni-305966 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 14:00:12.727532  611344 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1124 14:00:12.731833  611344 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 14:00:12.744130  611344 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1124 14:00:12.745252  611344 kubeadm.go:884] updating cluster {Name:newest-cni-305966 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-305966 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 14:00:12.745403  611344 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 14:00:12.745468  611344 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 14:00:12.779183  611344 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 14:00:12.779205  611344 crio.go:433] Images already preloaded, skipping extraction
	I1124 14:00:12.779259  611344 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 14:00:12.806328  611344 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 14:00:12.806352  611344 cache_images.go:86] Images are preloaded, skipping loading
	I1124 14:00:12.806360  611344 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.34.1 crio true true} ...
	I1124 14:00:12.806482  611344 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-305966 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-305966 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1124 14:00:12.806572  611344 ssh_runner.go:195] Run: crio config
	I1124 14:00:12.854458  611344 cni.go:84] Creating CNI manager for ""
	I1124 14:00:12.854479  611344 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 14:00:12.854496  611344 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1124 14:00:12.854518  611344 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-305966 NodeName:newest-cni-305966 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 14:00:12.854638  611344 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-305966"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1124 14:00:12.854692  611344 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1124 14:00:12.862451  611344 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 14:00:12.862531  611344 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 14:00:12.869871  611344 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1124 14:00:12.882752  611344 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 14:00:12.897294  611344 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
	I1124 14:00:12.910632  611344 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1124 14:00:12.914337  611344 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 14:00:12.923870  611344 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 14:00:13.008239  611344 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 14:00:13.038363  611344 certs.go:69] Setting up /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/newest-cni-305966 for IP: 192.168.94.2
	I1124 14:00:13.038387  611344 certs.go:195] generating shared ca certs ...
	I1124 14:00:13.038406  611344 certs.go:227] acquiring lock for ca certs: {Name:mk929c5478505d0d4647158f3ccc02830de7b582 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:00:13.038582  611344 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21932-348000/.minikube/ca.key
	I1124 14:00:13.038637  611344 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21932-348000/.minikube/proxy-client-ca.key
	I1124 14:00:13.038650  611344 certs.go:257] generating profile certs ...
	I1124 14:00:13.038762  611344 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/newest-cni-305966/client.key
	I1124 14:00:13.038836  611344 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/newest-cni-305966/apiserver.key.707ba182
	I1124 14:00:13.038907  611344 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/newest-cni-305966/proxy-client.key
	I1124 14:00:13.039052  611344 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-348000/.minikube/certs/351593.pem (1338 bytes)
	W1124 14:00:13.039096  611344 certs.go:480] ignoring /home/jenkins/minikube-integration/21932-348000/.minikube/certs/351593_empty.pem, impossibly tiny 0 bytes
	I1124 14:00:13.039108  611344 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-348000/.minikube/certs/ca-key.pem (1675 bytes)
	I1124 14:00:13.039141  611344 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-348000/.minikube/certs/ca.pem (1078 bytes)
	I1124 14:00:13.039174  611344 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-348000/.minikube/certs/cert.pem (1123 bytes)
	I1124 14:00:13.039205  611344 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-348000/.minikube/certs/key.pem (1675 bytes)
	I1124 14:00:13.039265  611344 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-348000/.minikube/files/etc/ssl/certs/3515932.pem (1708 bytes)
	I1124 14:00:13.040322  611344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 14:00:13.063001  611344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 14:00:13.085566  611344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 14:00:13.106652  611344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1124 14:00:13.127998  611344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/newest-cni-305966/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1124 14:00:13.152231  611344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/newest-cni-305966/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1124 14:00:13.171960  611344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/newest-cni-305966/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 14:00:13.189487  611344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/newest-cni-305966/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1124 14:00:13.207088  611344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 14:00:13.224385  611344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/certs/351593.pem --> /usr/share/ca-certificates/351593.pem (1338 bytes)
	I1124 14:00:13.241576  611344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/files/etc/ssl/certs/3515932.pem --> /usr/share/ca-certificates/3515932.pem (1708 bytes)
	I1124 14:00:13.259033  611344 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 14:00:13.270834  611344 ssh_runner.go:195] Run: openssl version
	I1124 14:00:13.276982  611344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 14:00:13.284965  611344 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 14:00:13.288421  611344 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 13:14 /usr/share/ca-certificates/minikubeCA.pem
	I1124 14:00:13.288470  611344 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 14:00:13.327459  611344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 14:00:13.336104  611344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/351593.pem && ln -fs /usr/share/ca-certificates/351593.pem /etc/ssl/certs/351593.pem"
	I1124 14:00:13.345465  611344 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/351593.pem
	I1124 14:00:13.350000  611344 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 13:19 /usr/share/ca-certificates/351593.pem
	I1124 14:00:13.350052  611344 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/351593.pem
	I1124 14:00:13.394543  611344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/351593.pem /etc/ssl/certs/51391683.0"
	I1124 14:00:13.403198  611344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3515932.pem && ln -fs /usr/share/ca-certificates/3515932.pem /etc/ssl/certs/3515932.pem"
	I1124 14:00:13.413018  611344 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3515932.pem
	I1124 14:00:13.417473  611344 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 13:19 /usr/share/ca-certificates/3515932.pem
	I1124 14:00:13.417525  611344 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3515932.pem
	I1124 14:00:13.459516  611344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3515932.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 14:00:13.467973  611344 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 14:00:13.471733  611344 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1124 14:00:13.515058  611344 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1124 14:00:13.559630  611344 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1124 14:00:13.610038  611344 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1124 14:00:13.659104  611344 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1124 14:00:13.720195  611344 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1124 14:00:13.780380  611344 kubeadm.go:401] StartCluster: {Name:newest-cni-305966 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-305966 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 14:00:13.780515  611344 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 14:00:13.780595  611344 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 14:00:13.818024  611344 cri.go:89] found id: "d89776e70ad5c30b82fb56e369bc8ca9c79f468ce35aa2d87e210fc8e246b7bc"
	I1124 14:00:13.818049  611344 cri.go:89] found id: "e119d99662f18807314c108742a912814e08072ba8c225bbfef5c4cb0089eaf5"
	I1124 14:00:13.818054  611344 cri.go:89] found id: "dfa15d58172bfe1fb0c35b29157a03f11ec4135cd785419188b5de0213bc9aa5"
	I1124 14:00:13.818058  611344 cri.go:89] found id: "4ad843941afebe46506b73f43e719ede87774017b0d3ad3b20355a54904afbf7"
	I1124 14:00:13.818062  611344 cri.go:89] found id: ""
	I1124 14:00:13.818103  611344 ssh_runner.go:195] Run: sudo runc list -f json
	W1124 14:00:13.833826  611344 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T14:00:13Z" level=error msg="open /run/runc: no such file or directory"
	I1124 14:00:13.833927  611344 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 14:00:13.848758  611344 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1124 14:00:13.848777  611344 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1124 14:00:13.848821  611344 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1124 14:00:13.860299  611344 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1124 14:00:13.861443  611344 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-305966" does not appear in /home/jenkins/minikube-integration/21932-348000/kubeconfig
	I1124 14:00:13.862296  611344 kubeconfig.go:62] /home/jenkins/minikube-integration/21932-348000/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-305966" cluster setting kubeconfig missing "newest-cni-305966" context setting]
	I1124 14:00:13.863520  611344 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-348000/kubeconfig: {Name:mk6bbc2300c711b206dd5e2ef6fd04da250c6338 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:00:13.865937  611344 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1124 14:00:13.878751  611344 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.94.2
	I1124 14:00:13.878798  611344 kubeadm.go:602] duration metric: took 30.013528ms to restartPrimaryControlPlane
	I1124 14:00:13.878809  611344 kubeadm.go:403] duration metric: took 98.440465ms to StartCluster
	I1124 14:00:13.878826  611344 settings.go:142] acquiring lock: {Name:mk72c17792ecaf5f4aecae499df19a0043a48eea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:00:13.878902  611344 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21932-348000/kubeconfig
	I1124 14:00:13.881473  611344 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-348000/kubeconfig: {Name:mk6bbc2300c711b206dd5e2ef6fd04da250c6338 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:00:13.881833  611344 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 14:00:13.882038  611344 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 14:00:13.882140  611344 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-305966"
	I1124 14:00:13.882152  611344 addons.go:70] Setting dashboard=true in profile "newest-cni-305966"
	I1124 14:00:13.882187  611344 addons.go:239] Setting addon dashboard=true in "newest-cni-305966"
	W1124 14:00:13.882200  611344 addons.go:248] addon dashboard should already be in state true
	I1124 14:00:13.882233  611344 host.go:66] Checking if "newest-cni-305966" exists ...
	I1124 14:00:13.882383  611344 config.go:182] Loaded profile config "newest-cni-305966": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 14:00:13.882737  611344 cli_runner.go:164] Run: docker container inspect newest-cni-305966 --format={{.State.Status}}
	I1124 14:00:13.882159  611344 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-305966"
	W1124 14:00:13.882915  611344 addons.go:248] addon storage-provisioner should already be in state true
	I1124 14:00:13.882963  611344 host.go:66] Checking if "newest-cni-305966" exists ...
	I1124 14:00:13.882169  611344 addons.go:70] Setting default-storageclass=true in profile "newest-cni-305966"
	I1124 14:00:13.883343  611344 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-305966"
	I1124 14:00:13.883464  611344 cli_runner.go:164] Run: docker container inspect newest-cni-305966 --format={{.State.Status}}
	I1124 14:00:13.883734  611344 cli_runner.go:164] Run: docker container inspect newest-cni-305966 --format={{.State.Status}}
	I1124 14:00:13.887025  611344 out.go:179] * Verifying Kubernetes components...
	I1124 14:00:13.888266  611344 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 14:00:13.912373  611344 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 14:00:13.913625  611344 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 14:00:13.913675  611344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 14:00:13.913739  611344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-305966
	I1124 14:00:13.917705  611344 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1124 14:00:13.918797  611344 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	
	
	==> CRI-O <==
	Nov 24 14:00:02 embed-certs-456660 crio[781]: time="2025-11-24T14:00:02.904123999Z" level=info msg="Starting container: 317d56e75a7e9493a4cad2f1c1b5469c6316bc107c2cfc767d684e093faf113a" id=e4c3fa2b-6054-43da-817b-f1387301a300 name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 14:00:02 embed-certs-456660 crio[781]: time="2025-11-24T14:00:02.906589254Z" level=info msg="Started container" PID=1835 containerID=317d56e75a7e9493a4cad2f1c1b5469c6316bc107c2cfc767d684e093faf113a description=kube-system/coredns-66bc5c9577-nnp2c/coredns id=e4c3fa2b-6054-43da-817b-f1387301a300 name=/runtime.v1.RuntimeService/StartContainer sandboxID=d66a41960baa579f5e3edac113282140786b9c690eb0d8c8adac2d1a1373543a
	Nov 24 14:00:05 embed-certs-456660 crio[781]: time="2025-11-24T14:00:05.301523782Z" level=info msg="Running pod sandbox: default/busybox/POD" id=b4348450-62cd-4418-abf3-2e50cef45dd6 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 24 14:00:05 embed-certs-456660 crio[781]: time="2025-11-24T14:00:05.301597096Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 14:00:05 embed-certs-456660 crio[781]: time="2025-11-24T14:00:05.306241706Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:7c10ec70125b8e4c56e5dd6caa08049eb2b655a9b6800e294bf5a17c5573d9e2 UID:de501807-9ee9-4a20-982b-0c68a8f2a4a7 NetNS:/var/run/netns/2f6ed552-a317-4cd8-8fa0-d6bb546f984c Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00008aee0}] Aliases:map[]}"
	Nov 24 14:00:05 embed-certs-456660 crio[781]: time="2025-11-24T14:00:05.306278283Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 24 14:00:05 embed-certs-456660 crio[781]: time="2025-11-24T14:00:05.315969325Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:7c10ec70125b8e4c56e5dd6caa08049eb2b655a9b6800e294bf5a17c5573d9e2 UID:de501807-9ee9-4a20-982b-0c68a8f2a4a7 NetNS:/var/run/netns/2f6ed552-a317-4cd8-8fa0-d6bb546f984c Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00008aee0}] Aliases:map[]}"
	Nov 24 14:00:05 embed-certs-456660 crio[781]: time="2025-11-24T14:00:05.316090718Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 24 14:00:05 embed-certs-456660 crio[781]: time="2025-11-24T14:00:05.316793894Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 24 14:00:05 embed-certs-456660 crio[781]: time="2025-11-24T14:00:05.317581831Z" level=info msg="Ran pod sandbox 7c10ec70125b8e4c56e5dd6caa08049eb2b655a9b6800e294bf5a17c5573d9e2 with infra container: default/busybox/POD" id=b4348450-62cd-4418-abf3-2e50cef45dd6 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 24 14:00:05 embed-certs-456660 crio[781]: time="2025-11-24T14:00:05.31878397Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=c826ed5c-2ca9-431a-9221-9437450c22dd name=/runtime.v1.ImageService/ImageStatus
	Nov 24 14:00:05 embed-certs-456660 crio[781]: time="2025-11-24T14:00:05.318928131Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=c826ed5c-2ca9-431a-9221-9437450c22dd name=/runtime.v1.ImageService/ImageStatus
	Nov 24 14:00:05 embed-certs-456660 crio[781]: time="2025-11-24T14:00:05.318971927Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=c826ed5c-2ca9-431a-9221-9437450c22dd name=/runtime.v1.ImageService/ImageStatus
	Nov 24 14:00:05 embed-certs-456660 crio[781]: time="2025-11-24T14:00:05.319693684Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=ca70f864-0f63-4da3-8cbc-e680491b0d00 name=/runtime.v1.ImageService/PullImage
	Nov 24 14:00:05 embed-certs-456660 crio[781]: time="2025-11-24T14:00:05.321178129Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 24 14:00:06 embed-certs-456660 crio[781]: time="2025-11-24T14:00:06.118374413Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=ca70f864-0f63-4da3-8cbc-e680491b0d00 name=/runtime.v1.ImageService/PullImage
	Nov 24 14:00:06 embed-certs-456660 crio[781]: time="2025-11-24T14:00:06.119865174Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=17934e1d-edc2-43ea-aa38-92f410077478 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 14:00:06 embed-certs-456660 crio[781]: time="2025-11-24T14:00:06.121527393Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=897a60e1-3257-4590-8aff-5c6dbb67959f name=/runtime.v1.ImageService/ImageStatus
	Nov 24 14:00:06 embed-certs-456660 crio[781]: time="2025-11-24T14:00:06.125709976Z" level=info msg="Creating container: default/busybox/busybox" id=97f48da5-c23d-4964-b995-91b8d70a7944 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 14:00:06 embed-certs-456660 crio[781]: time="2025-11-24T14:00:06.125860637Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 14:00:06 embed-certs-456660 crio[781]: time="2025-11-24T14:00:06.130944289Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 14:00:06 embed-certs-456660 crio[781]: time="2025-11-24T14:00:06.131928726Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 14:00:06 embed-certs-456660 crio[781]: time="2025-11-24T14:00:06.186773145Z" level=info msg="Created container c4489711f08dd885c0aae412885b28e6c9990cc69dcdf68353fd4258a95188e3: default/busybox/busybox" id=97f48da5-c23d-4964-b995-91b8d70a7944 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 14:00:06 embed-certs-456660 crio[781]: time="2025-11-24T14:00:06.188073457Z" level=info msg="Starting container: c4489711f08dd885c0aae412885b28e6c9990cc69dcdf68353fd4258a95188e3" id=897db4ac-ba49-47d0-bcd7-237ea0f0e64f name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 14:00:06 embed-certs-456660 crio[781]: time="2025-11-24T14:00:06.190292864Z" level=info msg="Started container" PID=1913 containerID=c4489711f08dd885c0aae412885b28e6c9990cc69dcdf68353fd4258a95188e3 description=default/busybox/busybox id=897db4ac-ba49-47d0-bcd7-237ea0f0e64f name=/runtime.v1.RuntimeService/StartContainer sandboxID=7c10ec70125b8e4c56e5dd6caa08049eb2b655a9b6800e294bf5a17c5573d9e2
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	c4489711f08dd       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   8 seconds ago        Running             busybox                   0                   7c10ec70125b8       busybox                                      default
	317d56e75a7e9       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      11 seconds ago       Running             coredns                   0                   d66a41960baa5       coredns-66bc5c9577-nnp2c                     kube-system
	fcaf3acb1e30e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      11 seconds ago       Running             storage-provisioner       0                   3c2fb032759d2       storage-provisioner                          kube-system
	532dafa2c9479       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      52 seconds ago       Running             kube-proxy                0                   78b64626389f0       kube-proxy-k5bxk                             kube-system
	05a0146f4a987       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      52 seconds ago       Running             kindnet-cni               0                   bd5239dec2ff8       kindnet-vlqg6                                kube-system
	586a7cdde2066       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      About a minute ago   Running             kube-apiserver            0                   9d9d6a5d54aba       kube-apiserver-embed-certs-456660            kube-system
	5978d9c9c9c38       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      About a minute ago   Running             etcd                      0                   3d89b7c189ec5       etcd-embed-certs-456660                      kube-system
	025bbe0817f7b       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      About a minute ago   Running             kube-controller-manager   0                   dbd92b3a0df40       kube-controller-manager-embed-certs-456660   kube-system
	4b5d2a2bea15d       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      About a minute ago   Running             kube-scheduler            0                   6b43bbe508c94       kube-scheduler-embed-certs-456660            kube-system
	
	
	==> coredns [317d56e75a7e9493a4cad2f1c1b5469c6316bc107c2cfc767d684e093faf113a] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:57172 - 57732 "HINFO IN 3735079604729961286.1565529449348014442. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.110600946s
	
	
	==> describe nodes <==
	Name:               embed-certs-456660
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-456660
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b5d1c9f4e75f4e638a533695fd62619949cefcab
	                    minikube.k8s.io/name=embed-certs-456660
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T13_59_16_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 13:59:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-456660
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 14:00:06 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 14:00:02 +0000   Mon, 24 Nov 2025 13:59:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 14:00:02 +0000   Mon, 24 Nov 2025 13:59:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 14:00:02 +0000   Mon, 24 Nov 2025 13:59:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 14:00:02 +0000   Mon, 24 Nov 2025 14:00:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    embed-certs-456660
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                950f3d12-76ba-49d9-8f39-c1dd2a09eea1
	  Boot ID:                    9a34d64a-eb17-4892-9c0b-855837aec864
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-66bc5c9577-nnp2c                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     53s
	  kube-system                 etcd-embed-certs-456660                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         59s
	  kube-system                 kindnet-vlqg6                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      53s
	  kube-system                 kube-apiserver-embed-certs-456660             250m (3%)     0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 kube-controller-manager-embed-certs-456660    200m (2%)     0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 kube-proxy-k5bxk                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	  kube-system                 kube-scheduler-embed-certs-456660             100m (1%)     0 (0%)      0 (0%)           0 (0%)         59s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 52s   kube-proxy       
	  Normal  Starting                 59s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  59s   kubelet          Node embed-certs-456660 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    59s   kubelet          Node embed-certs-456660 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     59s   kubelet          Node embed-certs-456660 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           54s   node-controller  Node embed-certs-456660 event: Registered Node embed-certs-456660 in Controller
	  Normal  NodeReady                12s   kubelet          Node embed-certs-456660 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a c8 62 0b 56 43 08 06
	[  +0.000389] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 7e 1c 4f d3 0f 6c 08 06
	[Nov24 13:16] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +1.054353] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +1.023912] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +1.023872] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +1.023897] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +1.023891] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +2.047768] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +4.031637] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +8.191144] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[ +16.382308] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[Nov24 13:17] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	
	
	==> etcd [5978d9c9c9c38465863810f68bce79488a332e254015b84ae563ea8e1bf7b6f3] <==
	{"level":"warn","ts":"2025-11-24T13:59:12.920307Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52666","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:59:12.927121Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52680","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:59:12.935512Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52690","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:59:12.942654Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:59:12.949088Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52724","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:59:12.955696Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52728","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:59:12.967340Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52750","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:59:12.974508Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52756","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:59:12.981309Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52774","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:59:13.027812Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:52788","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T13:59:19.275699Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"126.495639ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9722597277730511815 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/serviceaccounts/kube-system/root-ca-cert-publisher\" mod_revision:0 > success:<request_put:<key:\"/registry/serviceaccounts/kube-system/root-ca-cert-publisher\" value_size:127 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-11-24T13:59:19.275803Z","caller":"traceutil/trace.go:172","msg":"trace[1272736819] transaction","detail":"{read_only:false; response_revision:289; number_of_response:1; }","duration":"193.829973ms","start":"2025-11-24T13:59:19.081957Z","end":"2025-11-24T13:59:19.275787Z","steps":["trace[1272736819] 'process raft request'  (duration: 66.902783ms)","trace[1272736819] 'compare'  (duration: 126.406039ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-24T13:59:19.536375Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"131.596203ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9722597277730511818 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/serviceaccounts/kube-system/disruption-controller\" mod_revision:0 > success:<request_put:<key:\"/registry/serviceaccounts/kube-system/disruption-controller\" value_size:126 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-11-24T13:59:19.536526Z","caller":"traceutil/trace.go:172","msg":"trace[1569707908] transaction","detail":"{read_only:false; response_revision:290; number_of_response:1; }","duration":"253.599277ms","start":"2025-11-24T13:59:19.282910Z","end":"2025-11-24T13:59:19.536509Z","steps":["trace[1569707908] 'process raft request'  (duration: 121.819336ms)","trace[1569707908] 'compare'  (duration: 131.490503ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-24T13:59:19.793186Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"125.742754ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9722597277730511823 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/serviceaccounts/kube-system/statefulset-controller\" mod_revision:0 > success:<request_put:<key:\"/registry/serviceaccounts/kube-system/statefulset-controller\" value_size:127 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-11-24T13:59:19.793264Z","caller":"traceutil/trace.go:172","msg":"trace[1782824535] transaction","detail":"{read_only:false; response_revision:291; number_of_response:1; }","duration":"248.222335ms","start":"2025-11-24T13:59:19.545031Z","end":"2025-11-24T13:59:19.793253Z","steps":["trace[1782824535] 'process raft request'  (duration: 122.372155ms)","trace[1782824535] 'compare'  (duration: 125.647575ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-24T13:59:20.152960Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"132.794031ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/default/default\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-24T13:59:20.153046Z","caller":"traceutil/trace.go:172","msg":"trace[159787259] range","detail":"{range_begin:/registry/serviceaccounts/default/default; range_end:; response_count:0; response_revision:293; }","duration":"132.895277ms","start":"2025-11-24T13:59:20.020132Z","end":"2025-11-24T13:59:20.153027Z","steps":["trace[159787259] 'agreement among raft nodes before linearized reading'  (duration: 18.499264ms)","trace[159787259] 'range keys from in-memory index tree'  (duration: 114.265374ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-24T13:59:20.153045Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"114.344915ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9722597277730511832 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/serviceaccounts/kube-system/generic-garbage-collector\" mod_revision:0 > success:<request_put:<key:\"/registry/serviceaccounts/kube-system/generic-garbage-collector\" value_size:130 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-11-24T13:59:20.153228Z","caller":"traceutil/trace.go:172","msg":"trace[431843744] transaction","detail":"{read_only:false; response_revision:294; number_of_response:1; }","duration":"268.739014ms","start":"2025-11-24T13:59:19.884462Z","end":"2025-11-24T13:59:20.153201Z","steps":["trace[431843744] 'process raft request'  (duration: 154.197653ms)","trace[431843744] 'compare'  (duration: 114.225651ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-24T13:59:20.542236Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"160.749049ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/horizontal-pod-autoscaler\" limit:1 ","response":"range_response_count:1 size:216"}
	{"level":"info","ts":"2025-11-24T13:59:20.542294Z","caller":"traceutil/trace.go:172","msg":"trace[796590260] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/horizontal-pod-autoscaler; range_end:; response_count:1; response_revision:295; }","duration":"160.822285ms","start":"2025-11-24T13:59:20.381455Z","end":"2025-11-24T13:59:20.542278Z","steps":["trace[796590260] 'range keys from in-memory index tree'  (duration: 160.681914ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-24T13:59:36.479381Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"157.594966ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/embed-certs-456660\" limit:1 ","response":"range_response_count:1 size:5589"}
	{"level":"info","ts":"2025-11-24T13:59:36.479553Z","caller":"traceutil/trace.go:172","msg":"trace[1608516697] range","detail":"{range_begin:/registry/minions/embed-certs-456660; range_end:; response_count:1; response_revision:394; }","duration":"157.781458ms","start":"2025-11-24T13:59:36.321757Z","end":"2025-11-24T13:59:36.479538Z","steps":["trace[1608516697] 'range keys from in-memory index tree'  (duration: 157.405779ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-24T14:00:10.594869Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"249.619742ms","expected-duration":"100ms","prefix":"","request":"header:<ID:9722597277730512362 > lease_revoke:<id:06ed9ab6294e9d55>","response":"size:28"}
	
	
	==> kernel <==
	 14:00:14 up  2:42,  0 user,  load average: 2.84, 2.93, 2.04
	Linux embed-certs-456660 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [05a0146f4a9872bede23f758c9e37e3b9f1e483041be8e2932126d527dd3d69c] <==
	I1124 13:59:22.155088       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 13:59:22.155293       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1124 13:59:22.155401       1 main.go:148] setting mtu 1500 for CNI 
	I1124 13:59:22.155418       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 13:59:22.155439       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T13:59:22Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 13:59:22.417848       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 13:59:22.417915       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 13:59:22.417929       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 13:59:22.418207       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1124 13:59:52.419457       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1124 13:59:52.419464       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1124 13:59:52.419466       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1124 13:59:52.419556       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1124 13:59:53.919267       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 13:59:53.919310       1 metrics.go:72] Registering metrics
	I1124 13:59:53.919373       1 controller.go:711] "Syncing nftables rules"
	I1124 14:00:02.421563       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1124 14:00:02.421626       1 main.go:301] handling current node
	I1124 14:00:12.421039       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1124 14:00:12.421103       1 main.go:301] handling current node
	
	
	==> kube-apiserver [586a7cdde2066ae997ce903a65e41e085cbe72f26cc152e03b16c2c82ff40659] <==
	I1124 13:59:13.627805       1 controller.go:667] quota admission added evaluator for: namespaces
	I1124 13:59:13.629781       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1124 13:59:13.629830       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 13:59:13.636245       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 13:59:13.636412       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1124 13:59:13.636831       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1124 13:59:13.804007       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1124 13:59:14.429848       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1124 13:59:14.433508       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1124 13:59:14.433523       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1124 13:59:14.881040       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 13:59:14.916710       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 13:59:15.035043       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1124 13:59:15.040807       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1124 13:59:15.042039       1 controller.go:667] quota admission added evaluator for: endpoints
	I1124 13:59:15.045865       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1124 13:59:15.486073       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1124 13:59:16.009986       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1124 13:59:16.019053       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1124 13:59:16.027826       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1124 13:59:21.037383       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1124 13:59:21.539192       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 13:59:21.544483       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 13:59:21.638052       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1124 14:00:13.096442       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8443->192.168.85.1:54962: use of closed network connection
	
	
	==> kube-controller-manager [025bbe0817f7b19f04fd5884aa950611e3ef140234518777451f07cef7cc5be4] <==
	I1124 13:59:20.623690       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="embed-certs-456660" podCIDRs=["10.244.0.0/24"]
	I1124 13:59:20.628030       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1124 13:59:20.633433       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1124 13:59:20.634708       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1124 13:59:20.634718       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1124 13:59:20.634740       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1124 13:59:20.634800       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1124 13:59:20.634823       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1124 13:59:20.634831       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1124 13:59:20.634938       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1124 13:59:20.634998       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1124 13:59:20.635120       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-456660"
	I1124 13:59:20.635173       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1124 13:59:20.636105       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1124 13:59:20.640164       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1124 13:59:20.640266       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1124 13:59:20.641419       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1124 13:59:20.643720       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1124 13:59:20.649903       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 13:59:20.658069       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1124 13:59:20.659303       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 13:59:20.659328       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1124 13:59:20.659337       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1124 13:59:20.671768       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 14:00:05.641163       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [532dafa2c9479a5ebf8bce70de8f66c67ff93b7b9d0713da6df2a38f3c3dc893] <==
	I1124 13:59:22.048362       1 server_linux.go:53] "Using iptables proxy"
	I1124 13:59:22.117444       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1124 13:59:22.217956       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1124 13:59:22.217997       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1124 13:59:22.218078       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 13:59:22.236312       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 13:59:22.236357       1 server_linux.go:132] "Using iptables Proxier"
	I1124 13:59:22.242020       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 13:59:22.242337       1 server.go:527] "Version info" version="v1.34.1"
	I1124 13:59:22.242363       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 13:59:22.244243       1 config.go:200] "Starting service config controller"
	I1124 13:59:22.244307       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 13:59:22.244243       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 13:59:22.244355       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 13:59:22.244273       1 config.go:106] "Starting endpoint slice config controller"
	I1124 13:59:22.244371       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 13:59:22.244998       1 config.go:309] "Starting node config controller"
	I1124 13:59:22.245016       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 13:59:22.245023       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1124 13:59:22.344797       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1124 13:59:22.344826       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1124 13:59:22.344803       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [4b5d2a2bea15d149f9529eb75368d336108a36a4fbc96ab3f4b5e62fb591b973] <==
	E1124 13:59:13.494879       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1124 13:59:13.494929       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1124 13:59:13.495009       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1124 13:59:13.495040       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1124 13:59:13.495044       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1124 13:59:13.495126       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1124 13:59:13.495137       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1124 13:59:13.495145       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1124 13:59:13.495156       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1124 13:59:14.364942       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1124 13:59:14.390030       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1124 13:59:14.448341       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1124 13:59:14.463380       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1124 13:59:14.490948       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1124 13:59:14.501049       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1124 13:59:14.532933       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1124 13:59:14.610233       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1124 13:59:14.621171       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1124 13:59:14.624233       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1124 13:59:14.630178       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1124 13:59:14.647168       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1124 13:59:14.681234       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1124 13:59:14.734634       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1124 13:59:14.911507       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1124 13:59:17.892359       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 24 13:59:16 embed-certs-456660 kubelet[1307]: I1124 13:59:16.875727    1307 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-embed-certs-456660" podStartSLOduration=1.875708027 podStartE2EDuration="1.875708027s" podCreationTimestamp="2025-11-24 13:59:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 13:59:16.866371681 +0000 UTC m=+1.109409326" watchObservedRunningTime="2025-11-24 13:59:16.875708027 +0000 UTC m=+1.118745672"
	Nov 24 13:59:16 embed-certs-456660 kubelet[1307]: I1124 13:59:16.875979    1307 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-embed-certs-456660" podStartSLOduration=1.875963507 podStartE2EDuration="1.875963507s" podCreationTimestamp="2025-11-24 13:59:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 13:59:16.87594679 +0000 UTC m=+1.118984435" watchObservedRunningTime="2025-11-24 13:59:16.875963507 +0000 UTC m=+1.119001153"
	Nov 24 13:59:16 embed-certs-456660 kubelet[1307]: E1124 13:59:16.876313    1307 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-embed-certs-456660\" already exists" pod="kube-system/etcd-embed-certs-456660"
	Nov 24 13:59:16 embed-certs-456660 kubelet[1307]: I1124 13:59:16.885698    1307 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-embed-certs-456660" podStartSLOduration=1.885679441 podStartE2EDuration="1.885679441s" podCreationTimestamp="2025-11-24 13:59:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 13:59:16.885542926 +0000 UTC m=+1.128580585" watchObservedRunningTime="2025-11-24 13:59:16.885679441 +0000 UTC m=+1.128717085"
	Nov 24 13:59:20 embed-certs-456660 kubelet[1307]: I1124 13:59:20.694120    1307 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 24 13:59:20 embed-certs-456660 kubelet[1307]: I1124 13:59:20.694906    1307 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 24 13:59:21 embed-certs-456660 kubelet[1307]: I1124 13:59:21.668636    1307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/863794ad-b0f4-49ab-b19d-59bf97d060d2-kube-proxy\") pod \"kube-proxy-k5bxk\" (UID: \"863794ad-b0f4-49ab-b19d-59bf97d060d2\") " pod="kube-system/kube-proxy-k5bxk"
	Nov 24 13:59:21 embed-certs-456660 kubelet[1307]: I1124 13:59:21.668697    1307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlmxf\" (UniqueName: \"kubernetes.io/projected/a307fcae-a790-4f9e-b917-01d795f6a487-kube-api-access-zlmxf\") pod \"kindnet-vlqg6\" (UID: \"a307fcae-a790-4f9e-b917-01d795f6a487\") " pod="kube-system/kindnet-vlqg6"
	Nov 24 13:59:21 embed-certs-456660 kubelet[1307]: I1124 13:59:21.668727    1307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/863794ad-b0f4-49ab-b19d-59bf97d060d2-xtables-lock\") pod \"kube-proxy-k5bxk\" (UID: \"863794ad-b0f4-49ab-b19d-59bf97d060d2\") " pod="kube-system/kube-proxy-k5bxk"
	Nov 24 13:59:21 embed-certs-456660 kubelet[1307]: I1124 13:59:21.668750    1307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xxkwk\" (UniqueName: \"kubernetes.io/projected/863794ad-b0f4-49ab-b19d-59bf97d060d2-kube-api-access-xxkwk\") pod \"kube-proxy-k5bxk\" (UID: \"863794ad-b0f4-49ab-b19d-59bf97d060d2\") " pod="kube-system/kube-proxy-k5bxk"
	Nov 24 13:59:21 embed-certs-456660 kubelet[1307]: I1124 13:59:21.668782    1307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a307fcae-a790-4f9e-b917-01d795f6a487-xtables-lock\") pod \"kindnet-vlqg6\" (UID: \"a307fcae-a790-4f9e-b917-01d795f6a487\") " pod="kube-system/kindnet-vlqg6"
	Nov 24 13:59:21 embed-certs-456660 kubelet[1307]: I1124 13:59:21.668805    1307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a307fcae-a790-4f9e-b917-01d795f6a487-lib-modules\") pod \"kindnet-vlqg6\" (UID: \"a307fcae-a790-4f9e-b917-01d795f6a487\") " pod="kube-system/kindnet-vlqg6"
	Nov 24 13:59:21 embed-certs-456660 kubelet[1307]: I1124 13:59:21.668828    1307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/a307fcae-a790-4f9e-b917-01d795f6a487-cni-cfg\") pod \"kindnet-vlqg6\" (UID: \"a307fcae-a790-4f9e-b917-01d795f6a487\") " pod="kube-system/kindnet-vlqg6"
	Nov 24 13:59:21 embed-certs-456660 kubelet[1307]: I1124 13:59:21.668851    1307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/863794ad-b0f4-49ab-b19d-59bf97d060d2-lib-modules\") pod \"kube-proxy-k5bxk\" (UID: \"863794ad-b0f4-49ab-b19d-59bf97d060d2\") " pod="kube-system/kube-proxy-k5bxk"
	Nov 24 13:59:22 embed-certs-456660 kubelet[1307]: I1124 13:59:22.889652    1307 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-k5bxk" podStartSLOduration=1.8896325379999999 podStartE2EDuration="1.889632538s" podCreationTimestamp="2025-11-24 13:59:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 13:59:22.889465887 +0000 UTC m=+7.132503527" watchObservedRunningTime="2025-11-24 13:59:22.889632538 +0000 UTC m=+7.132670183"
	Nov 24 13:59:22 embed-certs-456660 kubelet[1307]: I1124 13:59:22.897787    1307 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-vlqg6" podStartSLOduration=1.897773778 podStartE2EDuration="1.897773778s" podCreationTimestamp="2025-11-24 13:59:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 13:59:22.897627649 +0000 UTC m=+7.140665304" watchObservedRunningTime="2025-11-24 13:59:22.897773778 +0000 UTC m=+7.140811422"
	Nov 24 14:00:02 embed-certs-456660 kubelet[1307]: I1124 14:00:02.485107    1307 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 24 14:00:02 embed-certs-456660 kubelet[1307]: I1124 14:00:02.557438    1307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/a310ffc3-e835-40a4-8bda-2c25e1f48e42-tmp\") pod \"storage-provisioner\" (UID: \"a310ffc3-e835-40a4-8bda-2c25e1f48e42\") " pod="kube-system/storage-provisioner"
	Nov 24 14:00:02 embed-certs-456660 kubelet[1307]: I1124 14:00:02.557491    1307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmmz9\" (UniqueName: \"kubernetes.io/projected/a310ffc3-e835-40a4-8bda-2c25e1f48e42-kube-api-access-cmmz9\") pod \"storage-provisioner\" (UID: \"a310ffc3-e835-40a4-8bda-2c25e1f48e42\") " pod="kube-system/storage-provisioner"
	Nov 24 14:00:02 embed-certs-456660 kubelet[1307]: I1124 14:00:02.557534    1307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/59037a19-9e26-4622-988f-10b221bec50f-config-volume\") pod \"coredns-66bc5c9577-nnp2c\" (UID: \"59037a19-9e26-4622-988f-10b221bec50f\") " pod="kube-system/coredns-66bc5c9577-nnp2c"
	Nov 24 14:00:02 embed-certs-456660 kubelet[1307]: I1124 14:00:02.557594    1307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tb8rw\" (UniqueName: \"kubernetes.io/projected/59037a19-9e26-4622-988f-10b221bec50f-kube-api-access-tb8rw\") pod \"coredns-66bc5c9577-nnp2c\" (UID: \"59037a19-9e26-4622-988f-10b221bec50f\") " pod="kube-system/coredns-66bc5c9577-nnp2c"
	Nov 24 14:00:02 embed-certs-456660 kubelet[1307]: I1124 14:00:02.978029    1307 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=41.978005467 podStartE2EDuration="41.978005467s" podCreationTimestamp="2025-11-24 13:59:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 14:00:02.977615213 +0000 UTC m=+47.220652862" watchObservedRunningTime="2025-11-24 14:00:02.978005467 +0000 UTC m=+47.221043111"
	Nov 24 14:00:02 embed-certs-456660 kubelet[1307]: I1124 14:00:02.990747    1307 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-nnp2c" podStartSLOduration=41.990726521 podStartE2EDuration="41.990726521s" podCreationTimestamp="2025-11-24 13:59:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-24 14:00:02.990509138 +0000 UTC m=+47.233546796" watchObservedRunningTime="2025-11-24 14:00:02.990726521 +0000 UTC m=+47.233764169"
	Nov 24 14:00:05 embed-certs-456660 kubelet[1307]: I1124 14:00:05.073296    1307 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6j2qn\" (UniqueName: \"kubernetes.io/projected/de501807-9ee9-4a20-982b-0c68a8f2a4a7-kube-api-access-6j2qn\") pod \"busybox\" (UID: \"de501807-9ee9-4a20-982b-0c68a8f2a4a7\") " pod="default/busybox"
	Nov 24 14:00:06 embed-certs-456660 kubelet[1307]: I1124 14:00:06.994312    1307 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=2.192648694 podStartE2EDuration="2.994295385s" podCreationTimestamp="2025-11-24 14:00:04 +0000 UTC" firstStartedPulling="2025-11-24 14:00:05.319270963 +0000 UTC m=+49.562308587" lastFinishedPulling="2025-11-24 14:00:06.12091765 +0000 UTC m=+50.363955278" observedRunningTime="2025-11-24 14:00:06.993981513 +0000 UTC m=+51.237019158" watchObservedRunningTime="2025-11-24 14:00:06.994295385 +0000 UTC m=+51.237333029"
	
	
	==> storage-provisioner [fcaf3acb1e30efd6b80d8dc736b1b21913dd21295d2ee1596e05e13dc6995f27] <==
	I1124 14:00:02.918373       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1124 14:00:02.928960       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1124 14:00:02.929004       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1124 14:00:02.932006       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:00:02.937018       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 14:00:02.937159       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1124 14:00:02.937247       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"651acb2c-b76c-4715-850b-34431f20fd28", APIVersion:"v1", ResourceVersion:"417", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-456660_ca4920dd-c7f9-4f1d-b034-6ce9971e28a9 became leader
	I1124 14:00:02.937346       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-456660_ca4920dd-c7f9-4f1d-b034-6ce9971e28a9!
	W1124 14:00:02.941565       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:00:02.947492       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 14:00:03.037599       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-456660_ca4920dd-c7f9-4f1d-b034-6ce9971e28a9!
	W1124 14:00:04.950658       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:00:04.955704       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:00:06.959353       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:00:06.964648       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:00:08.968680       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:00:08.977025       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:00:10.980121       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:00:11.079596       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:00:13.083068       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:00:13.088183       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:00:15.091798       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:00:15.096026       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-456660 -n embed-certs-456660
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-456660 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.58s)
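Note on the repeated client-go warnings in the storage-provisioner log above: the 14:00:02 event shows the provisioner's leader election is keyed to a v1 Endpoints object (kube-system/k8s.io-minikube-hostpath), which is what triggers the "v1 Endpoints is deprecated in v1.33+" warning on every renewal. A quick way to look at that lock object is sketched below; it assumes the embed-certs-456660 kubeconfig context used by the harness is still reachable and is not part of the test itself.

	kubectl --context embed-certs-456660 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml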

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (5.84s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-305966 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p newest-cni-305966 --alsologtostderr -v=1: exit status 80 (2.036195224s)

                                                
                                                
-- stdout --
	* Pausing node newest-cni-305966 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1124 14:00:17.472842  617539 out.go:360] Setting OutFile to fd 1 ...
	I1124 14:00:17.473102  617539 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 14:00:17.473111  617539 out.go:374] Setting ErrFile to fd 2...
	I1124 14:00:17.473115  617539 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 14:00:17.473322  617539 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-348000/.minikube/bin
	I1124 14:00:17.473555  617539 out.go:368] Setting JSON to false
	I1124 14:00:17.473578  617539 mustload.go:66] Loading cluster: newest-cni-305966
	I1124 14:00:17.473939  617539 config.go:182] Loaded profile config "newest-cni-305966": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 14:00:17.474331  617539 cli_runner.go:164] Run: docker container inspect newest-cni-305966 --format={{.State.Status}}
	I1124 14:00:17.494643  617539 host.go:66] Checking if "newest-cni-305966" exists ...
	I1124 14:00:17.495071  617539 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 14:00:17.562085  617539 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:95 OomKillDisable:false NGoroutines:97 SystemTime:2025-11-24 14:00:17.55109911 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 14:00:17.562989  617539 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:newest-cni-305966 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true)
wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1124 14:00:17.564992  617539 out.go:179] * Pausing node newest-cni-305966 ... 
	I1124 14:00:17.566085  617539 host.go:66] Checking if "newest-cni-305966" exists ...
	I1124 14:00:17.566420  617539 ssh_runner.go:195] Run: systemctl --version
	I1124 14:00:17.566472  617539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-305966
	I1124 14:00:17.586569  617539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/newest-cni-305966/id_rsa Username:docker}
	I1124 14:00:17.689983  617539 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 14:00:17.701543  617539 pause.go:52] kubelet running: true
	I1124 14:00:17.701610  617539 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1124 14:00:17.834432  617539 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1124 14:00:17.834530  617539 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1124 14:00:17.898833  617539 cri.go:89] found id: "93ca1d74735d61de68ecdb39646c7d3ef72eef0bd011bb61a209f4409d96a341"
	I1124 14:00:17.898856  617539 cri.go:89] found id: "960f84874b52e42fe78f3a9cba42e0e7801828964c0d30c9a28aa87a5a060904"
	I1124 14:00:17.898860  617539 cri.go:89] found id: "d89776e70ad5c30b82fb56e369bc8ca9c79f468ce35aa2d87e210fc8e246b7bc"
	I1124 14:00:17.898863  617539 cri.go:89] found id: "e119d99662f18807314c108742a912814e08072ba8c225bbfef5c4cb0089eaf5"
	I1124 14:00:17.898866  617539 cri.go:89] found id: "dfa15d58172bfe1fb0c35b29157a03f11ec4135cd785419188b5de0213bc9aa5"
	I1124 14:00:17.898869  617539 cri.go:89] found id: "4ad843941afebe46506b73f43e719ede87774017b0d3ad3b20355a54904afbf7"
	I1124 14:00:17.898872  617539 cri.go:89] found id: ""
	I1124 14:00:17.898931  617539 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 14:00:17.910595  617539 retry.go:31] will retry after 286.432627ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T14:00:17Z" level=error msg="open /run/runc: no such file or directory"
	I1124 14:00:18.198147  617539 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 14:00:18.210792  617539 pause.go:52] kubelet running: false
	I1124 14:00:18.210866  617539 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1124 14:00:18.330834  617539 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1124 14:00:18.330926  617539 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1124 14:00:18.401950  617539 cri.go:89] found id: "93ca1d74735d61de68ecdb39646c7d3ef72eef0bd011bb61a209f4409d96a341"
	I1124 14:00:18.401979  617539 cri.go:89] found id: "960f84874b52e42fe78f3a9cba42e0e7801828964c0d30c9a28aa87a5a060904"
	I1124 14:00:18.401985  617539 cri.go:89] found id: "d89776e70ad5c30b82fb56e369bc8ca9c79f468ce35aa2d87e210fc8e246b7bc"
	I1124 14:00:18.401990  617539 cri.go:89] found id: "e119d99662f18807314c108742a912814e08072ba8c225bbfef5c4cb0089eaf5"
	I1124 14:00:18.401995  617539 cri.go:89] found id: "dfa15d58172bfe1fb0c35b29157a03f11ec4135cd785419188b5de0213bc9aa5"
	I1124 14:00:18.402000  617539 cri.go:89] found id: "4ad843941afebe46506b73f43e719ede87774017b0d3ad3b20355a54904afbf7"
	I1124 14:00:18.402007  617539 cri.go:89] found id: ""
	I1124 14:00:18.402050  617539 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 14:00:18.413518  617539 retry.go:31] will retry after 297.243777ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T14:00:18Z" level=error msg="open /run/runc: no such file or directory"
	I1124 14:00:18.711001  617539 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 14:00:18.724156  617539 pause.go:52] kubelet running: false
	I1124 14:00:18.724207  617539 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1124 14:00:18.841575  617539 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1124 14:00:18.841724  617539 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1124 14:00:18.915530  617539 cri.go:89] found id: "93ca1d74735d61de68ecdb39646c7d3ef72eef0bd011bb61a209f4409d96a341"
	I1124 14:00:18.915558  617539 cri.go:89] found id: "960f84874b52e42fe78f3a9cba42e0e7801828964c0d30c9a28aa87a5a060904"
	I1124 14:00:18.915565  617539 cri.go:89] found id: "d89776e70ad5c30b82fb56e369bc8ca9c79f468ce35aa2d87e210fc8e246b7bc"
	I1124 14:00:18.915570  617539 cri.go:89] found id: "e119d99662f18807314c108742a912814e08072ba8c225bbfef5c4cb0089eaf5"
	I1124 14:00:18.915575  617539 cri.go:89] found id: "dfa15d58172bfe1fb0c35b29157a03f11ec4135cd785419188b5de0213bc9aa5"
	I1124 14:00:18.915588  617539 cri.go:89] found id: "4ad843941afebe46506b73f43e719ede87774017b0d3ad3b20355a54904afbf7"
	I1124 14:00:18.915591  617539 cri.go:89] found id: ""
	I1124 14:00:18.915635  617539 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 14:00:18.928138  617539 retry.go:31] will retry after 301.227829ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T14:00:18Z" level=error msg="open /run/runc: no such file or directory"
	I1124 14:00:19.230459  617539 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 14:00:19.247785  617539 pause.go:52] kubelet running: false
	I1124 14:00:19.247836  617539 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1124 14:00:19.361066  617539 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1124 14:00:19.361150  617539 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1124 14:00:19.427153  617539 cri.go:89] found id: "93ca1d74735d61de68ecdb39646c7d3ef72eef0bd011bb61a209f4409d96a341"
	I1124 14:00:19.427184  617539 cri.go:89] found id: "960f84874b52e42fe78f3a9cba42e0e7801828964c0d30c9a28aa87a5a060904"
	I1124 14:00:19.427192  617539 cri.go:89] found id: "d89776e70ad5c30b82fb56e369bc8ca9c79f468ce35aa2d87e210fc8e246b7bc"
	I1124 14:00:19.427197  617539 cri.go:89] found id: "e119d99662f18807314c108742a912814e08072ba8c225bbfef5c4cb0089eaf5"
	I1124 14:00:19.427202  617539 cri.go:89] found id: "dfa15d58172bfe1fb0c35b29157a03f11ec4135cd785419188b5de0213bc9aa5"
	I1124 14:00:19.427207  617539 cri.go:89] found id: "4ad843941afebe46506b73f43e719ede87774017b0d3ad3b20355a54904afbf7"
	I1124 14:00:19.427212  617539 cri.go:89] found id: ""
	I1124 14:00:19.427260  617539 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 14:00:19.441406  617539 out.go:203] 
	W1124 14:00:19.442522  617539 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T14:00:19Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T14:00:19Z" level=error msg="open /run/runc: no such file or directory"
	
	W1124 14:00:19.442542  617539 out.go:285] * 
	* 
	W1124 14:00:19.447138  617539 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1124 14:00:19.448203  617539 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p newest-cni-305966 --alsologtostderr -v=1 failed: exit status 80
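The failure above is mechanical: minikube pause disables the kubelet, lists the kube-system/kubernetes-dashboard/istio-operator containers via crictl, then tries to enumerate runtime state with `sudo runc list -f json`, and that last step fails repeatedly because /run/runc does not exist inside the node, so after the retries it exits with GUEST_PAUSE (exit status 80). The same sequence can be replayed by hand against the still-running node container. This is only a sketch: it substitutes `docker exec` for the SSH path the test uses, and assumes the newest-cni-305966 container from the inspect output below is still up; every command inside the node is taken verbatim from the stderr log.

	docker exec newest-cni-305966 sudo systemctl is-active kubelet
	docker exec newest-cni-305966 sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
	docker exec newest-cni-305966 sudo runc list -f json    # reproduces: open /run/runc: no such file or directory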
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-305966
helpers_test.go:243: (dbg) docker inspect newest-cni-305966:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d5c8bb04c9a8b101735edadb698799e5d6c945521b86d4301b6ea670806af7c0",
	        "Created": "2025-11-24T13:59:37.467773592Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 611598,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T14:00:05.618436087Z",
	            "FinishedAt": "2025-11-24T14:00:04.74109777Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/d5c8bb04c9a8b101735edadb698799e5d6c945521b86d4301b6ea670806af7c0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d5c8bb04c9a8b101735edadb698799e5d6c945521b86d4301b6ea670806af7c0/hostname",
	        "HostsPath": "/var/lib/docker/containers/d5c8bb04c9a8b101735edadb698799e5d6c945521b86d4301b6ea670806af7c0/hosts",
	        "LogPath": "/var/lib/docker/containers/d5c8bb04c9a8b101735edadb698799e5d6c945521b86d4301b6ea670806af7c0/d5c8bb04c9a8b101735edadb698799e5d6c945521b86d4301b6ea670806af7c0-json.log",
	        "Name": "/newest-cni-305966",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-305966:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-305966",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d5c8bb04c9a8b101735edadb698799e5d6c945521b86d4301b6ea670806af7c0",
	                "LowerDir": "/var/lib/docker/overlay2/1808584ab797bfabf59b5eb852f6a41c74927bfca99095e0562af0f66a3fd777-init/diff:/var/lib/docker/overlay2/b17d6205cf290186b389ac7c1255d7274fea54ef27df9ff8755bddd2d25eb638/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1808584ab797bfabf59b5eb852f6a41c74927bfca99095e0562af0f66a3fd777/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1808584ab797bfabf59b5eb852f6a41c74927bfca99095e0562af0f66a3fd777/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1808584ab797bfabf59b5eb852f6a41c74927bfca99095e0562af0f66a3fd777/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-305966",
	                "Source": "/var/lib/docker/volumes/newest-cni-305966/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-305966",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-305966",
	                "name.minikube.sigs.k8s.io": "newest-cni-305966",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "c055989e1384b9ede1ec9c4428c7779f3050a2fd2f5dc52bc67b27f0534a083a",
	            "SandboxKey": "/var/run/docker/netns/c055989e1384",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33463"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33464"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33467"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33465"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33466"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-305966": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b817ca8b27f62f3a3563cdb6a0b78b72617f6f646af87e5319081625ae16c4aa",
	                    "EndpointID": "c3c54093c946c0b399f26403aab56cef2e88ee826799fe72637182f3a8d21313",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "9a:31:00:cf:35:c4",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-305966",
	                        "d5c8bb04c9a8"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
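For reference, the SSH port the pause command connected to (sshutil reports 127.0.0.1:33463 in the stderr log) matches the "22/tcp" HostPort in the Ports block above. The same Go-template inspect the harness ran at 14:00:17.566472 can be repeated manually, assuming the container is still running; this is a sketch of that one command, not part of the test:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' newest-cni-305966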
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-305966 -n newest-cni-305966
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-305966 -n newest-cni-305966: exit status 2 (332.716501ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-305966 logs -n 25
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ image   │ no-preload-495729 image list --format=json                                                                                                                                                                                                    │ no-preload-495729            │ jenkins │ v1.37.0 │ 24 Nov 25 13:59 UTC │ 24 Nov 25 13:59 UTC │
	│ pause   │ -p no-preload-495729 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-495729            │ jenkins │ v1.37.0 │ 24 Nov 25 13:59 UTC │                     │
	│ delete  │ -p no-preload-495729                                                                                                                                                                                                                          │ no-preload-495729            │ jenkins │ v1.37.0 │ 24 Nov 25 13:59 UTC │ 24 Nov 25 13:59 UTC │
	│ delete  │ -p no-preload-495729                                                                                                                                                                                                                          │ no-preload-495729            │ jenkins │ v1.37.0 │ 24 Nov 25 13:59 UTC │ 24 Nov 25 13:59 UTC │
	│ delete  │ -p disable-driver-mounts-036543                                                                                                                                                                                                               │ disable-driver-mounts-036543 │ jenkins │ v1.37.0 │ 24 Nov 25 13:59 UTC │ 24 Nov 25 13:59 UTC │
	│ start   │ -p default-k8s-diff-port-098307 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-098307 │ jenkins │ v1.37.0 │ 24 Nov 25 13:59 UTC │ 24 Nov 25 13:59 UTC │
	│ image   │ old-k8s-version-551674 image list --format=json                                                                                                                                                                                               │ old-k8s-version-551674       │ jenkins │ v1.37.0 │ 24 Nov 25 13:59 UTC │ 24 Nov 25 13:59 UTC │
	│ pause   │ -p old-k8s-version-551674 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-551674       │ jenkins │ v1.37.0 │ 24 Nov 25 13:59 UTC │                     │
	│ delete  │ -p old-k8s-version-551674                                                                                                                                                                                                                     │ old-k8s-version-551674       │ jenkins │ v1.37.0 │ 24 Nov 25 13:59 UTC │ 24 Nov 25 13:59 UTC │
	│ delete  │ -p old-k8s-version-551674                                                                                                                                                                                                                     │ old-k8s-version-551674       │ jenkins │ v1.37.0 │ 24 Nov 25 13:59 UTC │ 24 Nov 25 13:59 UTC │
	│ start   │ -p newest-cni-305966 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-305966            │ jenkins │ v1.37.0 │ 24 Nov 25 13:59 UTC │ 24 Nov 25 14:00 UTC │
	│ start   │ -p kubernetes-upgrade-061040 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-061040    │ jenkins │ v1.37.0 │ 24 Nov 25 13:59 UTC │                     │
	│ start   │ -p kubernetes-upgrade-061040 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-061040    │ jenkins │ v1.37.0 │ 24 Nov 25 13:59 UTC │ 24 Nov 25 14:00 UTC │
	│ addons  │ enable metrics-server -p newest-cni-305966 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-305966            │ jenkins │ v1.37.0 │ 24 Nov 25 14:00 UTC │                     │
	│ stop    │ -p newest-cni-305966 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-305966            │ jenkins │ v1.37.0 │ 24 Nov 25 14:00 UTC │ 24 Nov 25 14:00 UTC │
	│ delete  │ -p kubernetes-upgrade-061040                                                                                                                                                                                                                  │ kubernetes-upgrade-061040    │ jenkins │ v1.37.0 │ 24 Nov 25 14:00 UTC │ 24 Nov 25 14:00 UTC │
	│ addons  │ enable dashboard -p newest-cni-305966 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-305966            │ jenkins │ v1.37.0 │ 24 Nov 25 14:00 UTC │ 24 Nov 25 14:00 UTC │
	│ start   │ -p newest-cni-305966 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-305966            │ jenkins │ v1.37.0 │ 24 Nov 25 14:00 UTC │ 24 Nov 25 14:00 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-098307 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-098307 │ jenkins │ v1.37.0 │ 24 Nov 25 14:00 UTC │                     │
	│ start   │ -p auto-165759 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-165759                  │ jenkins │ v1.37.0 │ 24 Nov 25 14:00 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-098307 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-098307 │ jenkins │ v1.37.0 │ 24 Nov 25 14:00 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-456660 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-456660           │ jenkins │ v1.37.0 │ 24 Nov 25 14:00 UTC │                     │
	│ stop    │ -p embed-certs-456660 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-456660           │ jenkins │ v1.37.0 │ 24 Nov 25 14:00 UTC │                     │
	│ image   │ newest-cni-305966 image list --format=json                                                                                                                                                                                                    │ newest-cni-305966            │ jenkins │ v1.37.0 │ 24 Nov 25 14:00 UTC │ 24 Nov 25 14:00 UTC │
	│ pause   │ -p newest-cni-305966 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-305966            │ jenkins │ v1.37.0 │ 24 Nov 25 14:00 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 14:00:06
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 14:00:06.639955  612215 out.go:360] Setting OutFile to fd 1 ...
	I1124 14:00:06.640075  612215 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 14:00:06.640086  612215 out.go:374] Setting ErrFile to fd 2...
	I1124 14:00:06.640093  612215 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 14:00:06.640294  612215 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-348000/.minikube/bin
	I1124 14:00:06.640720  612215 out.go:368] Setting JSON to false
	I1124 14:00:06.641938  612215 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":9754,"bootTime":1763983053,"procs":319,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 14:00:06.641994  612215 start.go:143] virtualization: kvm guest
	I1124 14:00:06.643737  612215 out.go:179] * [auto-165759] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1124 14:00:06.644898  612215 out.go:179]   - MINIKUBE_LOCATION=21932
	I1124 14:00:06.644920  612215 notify.go:221] Checking for updates...
	I1124 14:00:06.647963  612215 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 14:00:06.648994  612215 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21932-348000/kubeconfig
	I1124 14:00:06.650029  612215 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21932-348000/.minikube
	I1124 14:00:06.651162  612215 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1124 14:00:06.652296  612215 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 14:00:06.653986  612215 config.go:182] Loaded profile config "default-k8s-diff-port-098307": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 14:00:06.654117  612215 config.go:182] Loaded profile config "embed-certs-456660": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 14:00:06.654273  612215 config.go:182] Loaded profile config "newest-cni-305966": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 14:00:06.654410  612215 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 14:00:06.678126  612215 docker.go:124] docker version: linux-29.0.3:Docker Engine - Community
	I1124 14:00:06.678224  612215 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 14:00:06.738636  612215 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-24 14:00:06.729329331 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 14:00:06.738783  612215 docker.go:319] overlay module found
	I1124 14:00:06.740540  612215 out.go:179] * Using the docker driver based on user configuration
	I1124 14:00:06.741585  612215 start.go:309] selected driver: docker
	I1124 14:00:06.741600  612215 start.go:927] validating driver "docker" against <nil>
	I1124 14:00:06.741610  612215 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 14:00:06.742172  612215 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 14:00:06.795665  612215 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-24 14:00:06.785850803 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 14:00:06.795856  612215 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1124 14:00:06.796102  612215 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 14:00:06.797705  612215 out.go:179] * Using Docker driver with root privileges
	I1124 14:00:06.798793  612215 cni.go:84] Creating CNI manager for ""
	I1124 14:00:06.798864  612215 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 14:00:06.798878  612215 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1124 14:00:06.798993  612215 start.go:353] cluster config:
	{Name:auto-165759 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-165759 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:cri
o CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: Au
toPauseInterval:1m0s}
	I1124 14:00:06.800311  612215 out.go:179] * Starting "auto-165759" primary control-plane node in "auto-165759" cluster
	I1124 14:00:06.801369  612215 cache.go:134] Beginning downloading kic base image for docker with crio
	I1124 14:00:06.802406  612215 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1124 14:00:06.803315  612215 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 14:00:06.803344  612215 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21932-348000/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1124 14:00:06.803357  612215 cache.go:65] Caching tarball of preloaded images
	I1124 14:00:06.803391  612215 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1124 14:00:06.803462  612215 preload.go:238] Found /home/jenkins/minikube-integration/21932-348000/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1124 14:00:06.803474  612215 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1124 14:00:06.803574  612215 profile.go:143] Saving config to /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/auto-165759/config.json ...
	I1124 14:00:06.803604  612215 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/auto-165759/config.json: {Name:mkafcf12b893460417f613b5956b061b507857b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
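	(The generated cluster config above is persisted as the profile's config.json. A minimal sketch for inspecting it from the host, assuming jq is available and using the MINIKUBE_HOME path shown in these logs:)
    # sketch: peek at the saved profile config for auto-165759
    jq '.KubernetesConfig | {KubernetesVersion, ContainerRuntime, NetworkPlugin, ServiceCIDR}' \
      /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/auto-165759/config.json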
	I1124 14:00:06.824406  612215 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1124 14:00:06.824437  612215 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1124 14:00:06.824457  612215 cache.go:240] Successfully downloaded all kic artifacts
	I1124 14:00:06.824499  612215 start.go:360] acquireMachinesLock for auto-165759: {Name:mke2972eaae0a3077df79966ba25decc1725d099 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 14:00:06.824601  612215 start.go:364] duration metric: took 79.565µs to acquireMachinesLock for "auto-165759"
	I1124 14:00:06.824623  612215 start.go:93] Provisioning new machine with config: &{Name:auto-165759 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-165759 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: Soc
ketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 14:00:06.824701  612215 start.go:125] createHost starting for "" (driver="docker")
	I1124 14:00:05.594487  611344 out.go:252] * Restarting existing docker container for "newest-cni-305966" ...
	I1124 14:00:05.594567  611344 cli_runner.go:164] Run: docker start newest-cni-305966
	I1124 14:00:06.042778  611344 cli_runner.go:164] Run: docker container inspect newest-cni-305966 --format={{.State.Status}}
	I1124 14:00:06.065571  611344 kic.go:430] container "newest-cni-305966" state is running.
	I1124 14:00:06.066040  611344 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-305966
	I1124 14:00:06.086933  611344 profile.go:143] Saving config to /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/newest-cni-305966/config.json ...
	I1124 14:00:06.087112  611344 machine.go:94] provisionDockerMachine start ...
	I1124 14:00:06.087177  611344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-305966
	I1124 14:00:06.106639  611344 main.go:143] libmachine: Using SSH client type: native
	I1124 14:00:06.106975  611344 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33463 <nil> <nil>}
	I1124 14:00:06.106997  611344 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 14:00:06.107857  611344 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50272->127.0.0.1:33463: read: connection reset by peer
	I1124 14:00:09.250576  611344 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-305966
	
	I1124 14:00:09.250635  611344 ubuntu.go:182] provisioning hostname "newest-cni-305966"
	I1124 14:00:09.250734  611344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-305966
	I1124 14:00:09.269624  611344 main.go:143] libmachine: Using SSH client type: native
	I1124 14:00:09.269963  611344 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33463 <nil> <nil>}
	I1124 14:00:09.269989  611344 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-305966 && echo "newest-cni-305966" | sudo tee /etc/hostname
	I1124 14:00:09.423943  611344 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-305966
	
	I1124 14:00:09.424043  611344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-305966
	I1124 14:00:09.442270  611344 main.go:143] libmachine: Using SSH client type: native
	I1124 14:00:09.442573  611344 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33463 <nil> <nil>}
	I1124 14:00:09.442602  611344 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-305966' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-305966/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-305966' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 14:00:09.591505  611344 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 14:00:09.591540  611344 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21932-348000/.minikube CaCertPath:/home/jenkins/minikube-integration/21932-348000/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21932-348000/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21932-348000/.minikube}
	I1124 14:00:09.591580  611344 ubuntu.go:190] setting up certificates
	I1124 14:00:09.591606  611344 provision.go:84] configureAuth start
	I1124 14:00:09.591678  611344 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-305966
	I1124 14:00:09.608000  611344 provision.go:143] copyHostCerts
	I1124 14:00:09.608076  611344 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-348000/.minikube/ca.pem, removing ...
	I1124 14:00:09.608092  611344 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-348000/.minikube/ca.pem
	I1124 14:00:09.608157  611344 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-348000/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21932-348000/.minikube/ca.pem (1078 bytes)
	I1124 14:00:09.608283  611344 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-348000/.minikube/cert.pem, removing ...
	I1124 14:00:09.608294  611344 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-348000/.minikube/cert.pem
	I1124 14:00:09.608331  611344 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-348000/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21932-348000/.minikube/cert.pem (1123 bytes)
	I1124 14:00:09.608412  611344 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-348000/.minikube/key.pem, removing ...
	I1124 14:00:09.608421  611344 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-348000/.minikube/key.pem
	I1124 14:00:09.608458  611344 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-348000/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21932-348000/.minikube/key.pem (1675 bytes)
	I1124 14:00:09.608524  611344 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21932-348000/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21932-348000/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21932-348000/.minikube/certs/ca-key.pem org=jenkins.newest-cni-305966 san=[127.0.0.1 192.168.94.2 localhost minikube newest-cni-305966]
	I1124 14:00:09.795774  611344 provision.go:177] copyRemoteCerts
	I1124 14:00:09.795836  611344 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 14:00:09.795882  611344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-305966
	I1124 14:00:09.815167  611344 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/newest-cni-305966/id_rsa Username:docker}
	I1124 14:00:09.919696  611344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1124 14:00:09.937539  611344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1124 14:00:09.955598  611344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1124 14:00:09.972859  611344 provision.go:87] duration metric: took 381.236357ms to configureAuth
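	(The server cert generated above carries the SANs listed at 14:00:09.608524 and is copied to /etc/docker/server.pem on the node. A sketch for confirming the SANs, assuming openssl inside the node as the later log lines suggest:)
    # sketch: verify SANs on the provisioned server certificate
    minikube -p newest-cni-305966 ssh -- "sudo openssl x509 -noout -text -in /etc/docker/server.pem | grep -A1 'Subject Alternative Name'"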
	I1124 14:00:09.972886  611344 ubuntu.go:206] setting minikube options for container-runtime
	I1124 14:00:09.973062  611344 config.go:182] Loaded profile config "newest-cni-305966": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 14:00:09.973189  611344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-305966
	I1124 14:00:09.990437  611344 main.go:143] libmachine: Using SSH client type: native
	I1124 14:00:09.990698  611344 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33463 <nil> <nil>}
	I1124 14:00:09.990720  611344 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1124 14:00:06.826428  612215 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1124 14:00:06.826727  612215 start.go:159] libmachine.API.Create for "auto-165759" (driver="docker")
	I1124 14:00:06.826765  612215 client.go:173] LocalClient.Create starting
	I1124 14:00:06.826856  612215 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21932-348000/.minikube/certs/ca.pem
	I1124 14:00:06.826912  612215 main.go:143] libmachine: Decoding PEM data...
	I1124 14:00:06.826942  612215 main.go:143] libmachine: Parsing certificate...
	I1124 14:00:06.827035  612215 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21932-348000/.minikube/certs/cert.pem
	I1124 14:00:06.827066  612215 main.go:143] libmachine: Decoding PEM data...
	I1124 14:00:06.827081  612215 main.go:143] libmachine: Parsing certificate...
	I1124 14:00:06.827525  612215 cli_runner.go:164] Run: docker network inspect auto-165759 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1124 14:00:06.846848  612215 cli_runner.go:211] docker network inspect auto-165759 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1124 14:00:06.846935  612215 network_create.go:284] running [docker network inspect auto-165759] to gather additional debugging logs...
	I1124 14:00:06.846959  612215 cli_runner.go:164] Run: docker network inspect auto-165759
	W1124 14:00:06.864846  612215 cli_runner.go:211] docker network inspect auto-165759 returned with exit code 1
	I1124 14:00:06.864876  612215 network_create.go:287] error running [docker network inspect auto-165759]: docker network inspect auto-165759: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network auto-165759 not found
	I1124 14:00:06.864905  612215 network_create.go:289] output of [docker network inspect auto-165759]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network auto-165759 not found
	
	** /stderr **
	I1124 14:00:06.865080  612215 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 14:00:06.883080  612215 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-d51e7dfe1049 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:4a:86:1b:17:16:ff} reservation:<nil>}
	I1124 14:00:06.884122  612215 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-e3a6280986d1 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:92:e6:88:24:ba:69} reservation:<nil>}
	I1124 14:00:06.884636  612215 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-e4f79d672777 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:32:e2:7c:23:0e:27} reservation:<nil>}
	I1124 14:00:06.885473  612215 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ea31e0}
	I1124 14:00:06.885501  612215 network_create.go:124] attempt to create docker network auto-165759 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1124 14:00:06.885543  612215 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-165759 auto-165759
	I1124 14:00:06.935118  612215 network_create.go:108] docker network auto-165759 192.168.76.0/24 created
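	(After skipping the subnets already in use, minikube created the auto-165759 network on 192.168.76.0/24. A sketch for confirming the chosen subnet and gateway from the host, using the same Go-template fields the log's own inspect command uses:)
    # sketch: confirm the network created above
    docker network inspect auto-165759 \
      --format 'subnet={{(index .IPAM.Config 0).Subnet}} gateway={{(index .IPAM.Config 0).Gateway}}'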
	I1124 14:00:06.935167  612215 kic.go:121] calculated static IP "192.168.76.2" for the "auto-165759" container
	I1124 14:00:06.935270  612215 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1124 14:00:06.954577  612215 cli_runner.go:164] Run: docker volume create auto-165759 --label name.minikube.sigs.k8s.io=auto-165759 --label created_by.minikube.sigs.k8s.io=true
	I1124 14:00:06.973699  612215 oci.go:103] Successfully created a docker volume auto-165759
	I1124 14:00:06.973773  612215 cli_runner.go:164] Run: docker run --rm --name auto-165759-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-165759 --entrypoint /usr/bin/test -v auto-165759:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1124 14:00:07.370078  612215 oci.go:107] Successfully prepared a docker volume auto-165759
	I1124 14:00:07.370137  612215 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 14:00:07.370149  612215 kic.go:194] Starting extracting preloaded images to volume ...
	I1124 14:00:07.370222  612215 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21932-348000/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v auto-165759:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
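	(The preloaded image tarball is unpacked into the auto-165759 volume by the throwaway kicbase container above. A sketch for locating that volume on the host:)
    # sketch: inspect the docker volume the preload was extracted into
    docker volume inspect auto-165759 --format '{{.Mountpoint}}'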
	I1124 14:00:11.272332  611344 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1124 14:00:11.272357  611344 machine.go:97] duration metric: took 5.185230585s to provisionDockerMachine
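	(The provisioning step above wrote /etc/sysconfig/crio.minikube inside the node, presumably consumed by the crio unit so CRI-O treats the service CIDR as an insecure registry. A sketch for confirming the file contents:)
    # sketch: confirm the sysconfig drop-in written above
    minikube -p newest-cni-305966 ssh -- sudo cat /etc/sysconfig/crio.minikube
    # expected, per the log: CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '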
	I1124 14:00:11.272370  611344 start.go:293] postStartSetup for "newest-cni-305966" (driver="docker")
	I1124 14:00:11.272381  611344 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 14:00:11.272443  611344 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 14:00:11.272503  611344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-305966
	I1124 14:00:11.289956  611344 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/newest-cni-305966/id_rsa Username:docker}
	I1124 14:00:11.390985  611344 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 14:00:11.394433  611344 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 14:00:11.394458  611344 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 14:00:11.394469  611344 filesync.go:126] Scanning /home/jenkins/minikube-integration/21932-348000/.minikube/addons for local assets ...
	I1124 14:00:11.394523  611344 filesync.go:126] Scanning /home/jenkins/minikube-integration/21932-348000/.minikube/files for local assets ...
	I1124 14:00:11.394620  611344 filesync.go:149] local asset: /home/jenkins/minikube-integration/21932-348000/.minikube/files/etc/ssl/certs/3515932.pem -> 3515932.pem in /etc/ssl/certs
	I1124 14:00:11.394726  611344 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 14:00:11.402145  611344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/files/etc/ssl/certs/3515932.pem --> /etc/ssl/certs/3515932.pem (1708 bytes)
	I1124 14:00:11.419357  611344 start.go:296] duration metric: took 146.974942ms for postStartSetup
	I1124 14:00:11.419428  611344 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 14:00:11.419465  611344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-305966
	I1124 14:00:11.436677  611344 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/newest-cni-305966/id_rsa Username:docker}
	I1124 14:00:11.536483  611344 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 14:00:11.541073  611344 fix.go:56] duration metric: took 5.96583724s for fixHost
	I1124 14:00:11.541099  611344 start.go:83] releasing machines lock for "newest-cni-305966", held for 5.965887021s
	I1124 14:00:11.541180  611344 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-305966
	I1124 14:00:11.558308  611344 ssh_runner.go:195] Run: cat /version.json
	I1124 14:00:11.558360  611344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-305966
	I1124 14:00:11.558426  611344 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 14:00:11.558516  611344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-305966
	I1124 14:00:11.575458  611344 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/newest-cni-305966/id_rsa Username:docker}
	I1124 14:00:11.576178  611344 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/newest-cni-305966/id_rsa Username:docker}
	I1124 14:00:11.671465  611344 ssh_runner.go:195] Run: systemctl --version
	I1124 14:00:11.727243  611344 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1124 14:00:11.759992  611344 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 14:00:11.764307  611344 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 14:00:11.764355  611344 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 14:00:11.771850  611344 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1124 14:00:11.771871  611344 start.go:496] detecting cgroup driver to use...
	I1124 14:00:11.771914  611344 detect.go:190] detected "systemd" cgroup driver on host os
	I1124 14:00:11.771962  611344 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1124 14:00:11.785647  611344 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1124 14:00:11.798863  611344 docker.go:218] disabling cri-docker service (if available) ...
	I1124 14:00:11.798925  611344 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 14:00:11.815812  611344 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 14:00:11.830569  611344 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 14:00:11.926016  611344 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 14:00:12.019537  611344 docker.go:234] disabling docker service ...
	I1124 14:00:12.019605  611344 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 14:00:12.035763  611344 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 14:00:12.048780  611344 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 14:00:12.132824  611344 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 14:00:12.228446  611344 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 14:00:12.241308  611344 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 14:00:12.255879  611344 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1124 14:00:12.256027  611344 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:00:12.265416  611344 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1124 14:00:12.265479  611344 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:00:12.274340  611344 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:00:12.282705  611344 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:00:12.294801  611344 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 14:00:12.303924  611344 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:00:12.312281  611344 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:00:12.319919  611344 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:00:12.327933  611344 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 14:00:12.334789  611344 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 14:00:12.341614  611344 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 14:00:12.455849  611344 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1124 14:00:12.614518  611344 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1124 14:00:12.614593  611344 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1124 14:00:12.619358  611344 start.go:564] Will wait 60s for crictl version
	I1124 14:00:12.619421  611344 ssh_runner.go:195] Run: which crictl
	I1124 14:00:12.623479  611344 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 14:00:12.648222  611344 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1124 14:00:12.648299  611344 ssh_runner.go:195] Run: crio --version
	I1124 14:00:12.677057  611344 ssh_runner.go:195] Run: crio --version
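	(The sed commands between 14:00:12.255 and 14:00:12.319 rewrite /etc/crio/crio.conf.d/02-crio.conf before the restart. A sketch for listing the keys they target, with the values the log says they set:)
    # sketch: show the crio drop-in keys edited above
    minikube -p newest-cni-305966 ssh -- "sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start' /etc/crio/crio.conf.d/02-crio.conf"
    # expected per the log: pause_image = "registry.k8s.io/pause:3.10.1",
    # cgroup_manager = "systemd", conmon_cgroup = "pod",
    # and "net.ipv4.ip_unprivileged_port_start=0" under default_sysctls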
	I1124 14:00:12.706583  611344 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1124 14:00:12.707690  611344 cli_runner.go:164] Run: docker network inspect newest-cni-305966 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 14:00:12.727532  611344 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1124 14:00:12.731833  611344 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 14:00:12.744130  611344 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1124 14:00:12.745252  611344 kubeadm.go:884] updating cluster {Name:newest-cni-305966 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-305966 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:
262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 14:00:12.745403  611344 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 14:00:12.745468  611344 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 14:00:12.779183  611344 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 14:00:12.779205  611344 crio.go:433] Images already preloaded, skipping extraction
	I1124 14:00:12.779259  611344 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 14:00:12.806328  611344 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 14:00:12.806352  611344 cache_images.go:86] Images are preloaded, skipping loading
	I1124 14:00:12.806360  611344 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.34.1 crio true true} ...
	I1124 14:00:12.806482  611344 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-305966 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-305966 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
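	(The kubelet [Unit]/[Service] snippet above is later written as the 10-kubeadm.conf drop-in, per the 367-byte scp at 14:00:12.869. A sketch for viewing the rendered unit plus drop-in on the node:)
    # sketch: view the kubelet unit and its minikube-written drop-in
    minikube -p newest-cni-305966 ssh -- sudo systemctl cat kubelet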
	I1124 14:00:12.806572  611344 ssh_runner.go:195] Run: crio config
	I1124 14:00:12.854458  611344 cni.go:84] Creating CNI manager for ""
	I1124 14:00:12.854479  611344 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 14:00:12.854496  611344 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1124 14:00:12.854518  611344 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-305966 NodeName:newest-cni-305966 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 14:00:12.854638  611344 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-305966"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1124 14:00:12.854692  611344 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1124 14:00:12.862451  611344 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 14:00:12.862531  611344 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 14:00:12.869871  611344 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1124 14:00:12.882752  611344 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 14:00:12.897294  611344 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
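	(The kubeadm config dumped above lands in /var/tmp/minikube/kubeadm.yaml.new via the scp on this line. Recent kubeadm releases can sanity-check such a file; a sketch, assuming kubeadm sits next to the kubelet binary found under /var/lib/minikube/binaries/v1.34.1:)
    # sketch: validate the generated kubeadm config with the node's kubeadm
    minikube -p newest-cni-305966 ssh -- "sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new"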
	I1124 14:00:12.910632  611344 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1124 14:00:12.914337  611344 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 14:00:12.923870  611344 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 14:00:13.008239  611344 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 14:00:13.038363  611344 certs.go:69] Setting up /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/newest-cni-305966 for IP: 192.168.94.2
	I1124 14:00:13.038387  611344 certs.go:195] generating shared ca certs ...
	I1124 14:00:13.038406  611344 certs.go:227] acquiring lock for ca certs: {Name:mk929c5478505d0d4647158f3ccc02830de7b582 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:00:13.038582  611344 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21932-348000/.minikube/ca.key
	I1124 14:00:13.038637  611344 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21932-348000/.minikube/proxy-client-ca.key
	I1124 14:00:13.038650  611344 certs.go:257] generating profile certs ...
	I1124 14:00:13.038762  611344 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/newest-cni-305966/client.key
	I1124 14:00:13.038836  611344 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/newest-cni-305966/apiserver.key.707ba182
	I1124 14:00:13.038907  611344 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/newest-cni-305966/proxy-client.key
	I1124 14:00:13.039052  611344 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-348000/.minikube/certs/351593.pem (1338 bytes)
	W1124 14:00:13.039096  611344 certs.go:480] ignoring /home/jenkins/minikube-integration/21932-348000/.minikube/certs/351593_empty.pem, impossibly tiny 0 bytes
	I1124 14:00:13.039108  611344 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-348000/.minikube/certs/ca-key.pem (1675 bytes)
	I1124 14:00:13.039141  611344 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-348000/.minikube/certs/ca.pem (1078 bytes)
	I1124 14:00:13.039174  611344 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-348000/.minikube/certs/cert.pem (1123 bytes)
	I1124 14:00:13.039205  611344 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-348000/.minikube/certs/key.pem (1675 bytes)
	I1124 14:00:13.039265  611344 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-348000/.minikube/files/etc/ssl/certs/3515932.pem (1708 bytes)
	I1124 14:00:13.040322  611344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 14:00:13.063001  611344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 14:00:13.085566  611344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 14:00:13.106652  611344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1124 14:00:13.127998  611344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/newest-cni-305966/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1124 14:00:13.152231  611344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/newest-cni-305966/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1124 14:00:13.171960  611344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/newest-cni-305966/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 14:00:13.189487  611344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/newest-cni-305966/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1124 14:00:13.207088  611344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 14:00:13.224385  611344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/certs/351593.pem --> /usr/share/ca-certificates/351593.pem (1338 bytes)
	I1124 14:00:13.241576  611344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/files/etc/ssl/certs/3515932.pem --> /usr/share/ca-certificates/3515932.pem (1708 bytes)
	I1124 14:00:13.259033  611344 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 14:00:13.270834  611344 ssh_runner.go:195] Run: openssl version
	I1124 14:00:13.276982  611344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 14:00:13.284965  611344 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 14:00:13.288421  611344 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 13:14 /usr/share/ca-certificates/minikubeCA.pem
	I1124 14:00:13.288470  611344 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 14:00:13.327459  611344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 14:00:13.336104  611344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/351593.pem && ln -fs /usr/share/ca-certificates/351593.pem /etc/ssl/certs/351593.pem"
	I1124 14:00:13.345465  611344 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/351593.pem
	I1124 14:00:13.350000  611344 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 13:19 /usr/share/ca-certificates/351593.pem
	I1124 14:00:13.350052  611344 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/351593.pem
	I1124 14:00:13.394543  611344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/351593.pem /etc/ssl/certs/51391683.0"
	I1124 14:00:13.403198  611344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3515932.pem && ln -fs /usr/share/ca-certificates/3515932.pem /etc/ssl/certs/3515932.pem"
	I1124 14:00:13.413018  611344 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3515932.pem
	I1124 14:00:13.417473  611344 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 13:19 /usr/share/ca-certificates/3515932.pem
	I1124 14:00:13.417525  611344 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3515932.pem
	I1124 14:00:13.459516  611344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3515932.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 14:00:13.467973  611344 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 14:00:13.471733  611344 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1124 14:00:13.515058  611344 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1124 14:00:13.559630  611344 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1124 14:00:13.610038  611344 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1124 14:00:13.659104  611344 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1124 14:00:13.720195  611344 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
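	(The openssl calls above use two idioms: "-hash -noout" prints the subject-name hash that names the /etc/ssl/certs/<hash>.0 symlinks, and "-checkend 86400" exits non-zero if a cert expires within 24 hours. A sketch of both, run inside the node via minikube ssh:)
    # sketch: the two openssl idioms used above
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941, matching the b5213941.0 symlink above
    sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 \
      && echo "valid for at least 24h" || echo "expires within 24h"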
	I1124 14:00:13.780380  611344 kubeadm.go:401] StartCluster: {Name:newest-cni-305966 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-305966 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262
144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 14:00:13.780515  611344 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 14:00:13.780595  611344 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 14:00:13.818024  611344 cri.go:89] found id: "d89776e70ad5c30b82fb56e369bc8ca9c79f468ce35aa2d87e210fc8e246b7bc"
	I1124 14:00:13.818049  611344 cri.go:89] found id: "e119d99662f18807314c108742a912814e08072ba8c225bbfef5c4cb0089eaf5"
	I1124 14:00:13.818054  611344 cri.go:89] found id: "dfa15d58172bfe1fb0c35b29157a03f11ec4135cd785419188b5de0213bc9aa5"
	I1124 14:00:13.818058  611344 cri.go:89] found id: "4ad843941afebe46506b73f43e719ede87774017b0d3ad3b20355a54904afbf7"
	I1124 14:00:13.818062  611344 cri.go:89] found id: ""
	I1124 14:00:13.818103  611344 ssh_runner.go:195] Run: sudo runc list -f json
	W1124 14:00:13.833826  611344 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T14:00:13Z" level=error msg="open /run/runc: no such file or directory"
	I1124 14:00:13.833927  611344 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 14:00:13.848758  611344 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1124 14:00:13.848777  611344 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1124 14:00:13.848821  611344 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1124 14:00:13.860299  611344 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1124 14:00:13.861443  611344 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-305966" does not appear in /home/jenkins/minikube-integration/21932-348000/kubeconfig
	I1124 14:00:13.862296  611344 kubeconfig.go:62] /home/jenkins/minikube-integration/21932-348000/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-305966" cluster setting kubeconfig missing "newest-cni-305966" context setting]
	I1124 14:00:13.863520  611344 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-348000/kubeconfig: {Name:mk6bbc2300c711b206dd5e2ef6fd04da250c6338 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:00:13.865937  611344 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1124 14:00:13.878751  611344 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.94.2
	I1124 14:00:13.878798  611344 kubeadm.go:602] duration metric: took 30.013528ms to restartPrimaryControlPlane
	I1124 14:00:13.878809  611344 kubeadm.go:403] duration metric: took 98.440465ms to StartCluster
	I1124 14:00:13.878826  611344 settings.go:142] acquiring lock: {Name:mk72c17792ecaf5f4aecae499df19a0043a48eea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:00:13.878902  611344 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21932-348000/kubeconfig
	I1124 14:00:13.881473  611344 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-348000/kubeconfig: {Name:mk6bbc2300c711b206dd5e2ef6fd04da250c6338 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:00:13.881833  611344 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 14:00:13.882038  611344 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 14:00:13.882140  611344 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-305966"
	I1124 14:00:13.882152  611344 addons.go:70] Setting dashboard=true in profile "newest-cni-305966"
	I1124 14:00:13.882187  611344 addons.go:239] Setting addon dashboard=true in "newest-cni-305966"
	W1124 14:00:13.882200  611344 addons.go:248] addon dashboard should already be in state true
	I1124 14:00:13.882233  611344 host.go:66] Checking if "newest-cni-305966" exists ...
	I1124 14:00:13.882383  611344 config.go:182] Loaded profile config "newest-cni-305966": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 14:00:13.882737  611344 cli_runner.go:164] Run: docker container inspect newest-cni-305966 --format={{.State.Status}}
	I1124 14:00:13.882159  611344 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-305966"
	W1124 14:00:13.882915  611344 addons.go:248] addon storage-provisioner should already be in state true
	I1124 14:00:13.882963  611344 host.go:66] Checking if "newest-cni-305966" exists ...
	I1124 14:00:13.882169  611344 addons.go:70] Setting default-storageclass=true in profile "newest-cni-305966"
	I1124 14:00:13.883343  611344 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-305966"
	I1124 14:00:13.883464  611344 cli_runner.go:164] Run: docker container inspect newest-cni-305966 --format={{.State.Status}}
	I1124 14:00:13.883734  611344 cli_runner.go:164] Run: docker container inspect newest-cni-305966 --format={{.State.Status}}
	I1124 14:00:13.887025  611344 out.go:179] * Verifying Kubernetes components...
	I1124 14:00:13.888266  611344 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 14:00:13.912373  611344 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 14:00:13.913625  611344 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 14:00:13.913675  611344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 14:00:13.913739  611344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-305966
	I1124 14:00:13.917705  611344 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1124 14:00:13.918797  611344 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1124 14:00:13.920220  611344 addons.go:239] Setting addon default-storageclass=true in "newest-cni-305966"
	W1124 14:00:13.920278  611344 addons.go:248] addon default-storageclass should already be in state true
	I1124 14:00:13.920322  611344 host.go:66] Checking if "newest-cni-305966" exists ...
	I1124 14:00:13.920963  611344 cli_runner.go:164] Run: docker container inspect newest-cni-305966 --format={{.State.Status}}
	I1124 14:00:13.923142  611344 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1124 14:00:13.923221  611344 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1124 14:00:13.923316  611344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-305966
	I1124 14:00:13.950962  611344 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/newest-cni-305966/id_rsa Username:docker}
	I1124 14:00:13.966436  611344 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 14:00:13.966459  611344 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 14:00:13.966520  611344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-305966
	I1124 14:00:13.971990  611344 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/newest-cni-305966/id_rsa Username:docker}
	I1124 14:00:13.993363  611344 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/newest-cni-305966/id_rsa Username:docker}
	I1124 14:00:14.090261  611344 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 14:00:14.103740  611344 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 14:00:14.120848  611344 api_server.go:52] waiting for apiserver process to appear ...
	I1124 14:00:14.121027  611344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 14:00:14.136621  611344 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1124 14:00:14.136649  611344 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1124 14:00:14.162367  611344 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 14:00:14.174687  611344 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1124 14:00:14.174713  611344 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1124 14:00:14.199721  611344 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1124 14:00:14.199762  611344 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1124 14:00:14.218224  611344 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1124 14:00:14.218251  611344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1124 14:00:14.238865  611344 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1124 14:00:14.238923  611344 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1124 14:00:14.258602  611344 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1124 14:00:14.258621  611344 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1124 14:00:14.276787  611344 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1124 14:00:14.276832  611344 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1124 14:00:14.294104  611344 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1124 14:00:14.294129  611344 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1124 14:00:14.311561  611344 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1124 14:00:14.311583  611344 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1124 14:00:14.329534  611344 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1124 14:00:11.793592  612215 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21932-348000/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v auto-165759:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (4.423324856s)
	I1124 14:00:11.793625  612215 kic.go:203] duration metric: took 4.423471856s to extract preloaded images to volume ...
	W1124 14:00:11.793727  612215 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1124 14:00:11.793767  612215 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1124 14:00:11.793827  612215 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1124 14:00:11.851109  612215 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-165759 --name auto-165759 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-165759 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-165759 --network auto-165759 --ip 192.168.76.2 --volume auto-165759:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1124 14:00:12.147656  612215 cli_runner.go:164] Run: docker container inspect auto-165759 --format={{.State.Running}}
	I1124 14:00:12.170110  612215 cli_runner.go:164] Run: docker container inspect auto-165759 --format={{.State.Status}}
	I1124 14:00:12.187432  612215 cli_runner.go:164] Run: docker exec auto-165759 stat /var/lib/dpkg/alternatives/iptables
	I1124 14:00:12.231829  612215 oci.go:144] the created container "auto-165759" has a running status.
	I1124 14:00:12.231859  612215 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21932-348000/.minikube/machines/auto-165759/id_rsa...
	I1124 14:00:12.425376  612215 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21932-348000/.minikube/machines/auto-165759/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1124 14:00:12.458732  612215 cli_runner.go:164] Run: docker container inspect auto-165759 --format={{.State.Status}}
	I1124 14:00:12.478104  612215 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1124 14:00:12.478152  612215 kic_runner.go:114] Args: [docker exec --privileged auto-165759 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1124 14:00:12.524590  612215 cli_runner.go:164] Run: docker container inspect auto-165759 --format={{.State.Status}}
	I1124 14:00:12.544431  612215 machine.go:94] provisionDockerMachine start ...
	I1124 14:00:12.544551  612215 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-165759
	I1124 14:00:12.563666  612215 main.go:143] libmachine: Using SSH client type: native
	I1124 14:00:12.563954  612215 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33468 <nil> <nil>}
	I1124 14:00:12.563968  612215 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 14:00:12.711809  612215 main.go:143] libmachine: SSH cmd err, output: <nil>: auto-165759
	
	I1124 14:00:12.711840  612215 ubuntu.go:182] provisioning hostname "auto-165759"
	I1124 14:00:12.711961  612215 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-165759
	I1124 14:00:12.730969  612215 main.go:143] libmachine: Using SSH client type: native
	I1124 14:00:12.731259  612215 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33468 <nil> <nil>}
	I1124 14:00:12.731276  612215 main.go:143] libmachine: About to run SSH command:
	sudo hostname auto-165759 && echo "auto-165759" | sudo tee /etc/hostname
	I1124 14:00:12.888489  612215 main.go:143] libmachine: SSH cmd err, output: <nil>: auto-165759
	
	I1124 14:00:12.888559  612215 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-165759
	I1124 14:00:12.907210  612215 main.go:143] libmachine: Using SSH client type: native
	I1124 14:00:12.907417  612215 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33468 <nil> <nil>}
	I1124 14:00:12.907439  612215 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-165759' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-165759/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-165759' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 14:00:13.053587  612215 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 14:00:13.053614  612215 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21932-348000/.minikube CaCertPath:/home/jenkins/minikube-integration/21932-348000/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21932-348000/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21932-348000/.minikube}
	I1124 14:00:13.053639  612215 ubuntu.go:190] setting up certificates
	I1124 14:00:13.053661  612215 provision.go:84] configureAuth start
	I1124 14:00:13.053726  612215 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-165759
	I1124 14:00:13.074607  612215 provision.go:143] copyHostCerts
	I1124 14:00:13.074673  612215 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-348000/.minikube/key.pem, removing ...
	I1124 14:00:13.074687  612215 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-348000/.minikube/key.pem
	I1124 14:00:13.074767  612215 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-348000/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21932-348000/.minikube/key.pem (1675 bytes)
	I1124 14:00:13.074990  612215 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-348000/.minikube/ca.pem, removing ...
	I1124 14:00:13.075007  612215 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-348000/.minikube/ca.pem
	I1124 14:00:13.075053  612215 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-348000/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21932-348000/.minikube/ca.pem (1078 bytes)
	I1124 14:00:13.075161  612215 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-348000/.minikube/cert.pem, removing ...
	I1124 14:00:13.075174  612215 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-348000/.minikube/cert.pem
	I1124 14:00:13.075209  612215 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-348000/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21932-348000/.minikube/cert.pem (1123 bytes)
	I1124 14:00:13.075301  612215 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21932-348000/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21932-348000/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21932-348000/.minikube/certs/ca-key.pem org=jenkins.auto-165759 san=[127.0.0.1 192.168.76.2 auto-165759 localhost minikube]
	I1124 14:00:13.153839  612215 provision.go:177] copyRemoteCerts
	I1124 14:00:13.153910  612215 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 14:00:13.153957  612215 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-165759
	I1124 14:00:13.174492  612215 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/auto-165759/id_rsa Username:docker}
	I1124 14:00:13.278186  612215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1124 14:00:13.297125  612215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1124 14:00:13.314074  612215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1124 14:00:13.333122  612215 provision.go:87] duration metric: took 279.448871ms to configureAuth
	I1124 14:00:13.333150  612215 ubuntu.go:206] setting minikube options for container-runtime
	I1124 14:00:13.333349  612215 config.go:182] Loaded profile config "auto-165759": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 14:00:13.333470  612215 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-165759
	I1124 14:00:13.352220  612215 main.go:143] libmachine: Using SSH client type: native
	I1124 14:00:13.352481  612215 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33468 <nil> <nil>}
	I1124 14:00:13.352499  612215 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1124 14:00:13.670530  612215 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1124 14:00:13.670559  612215 machine.go:97] duration metric: took 1.126102746s to provisionDockerMachine
	I1124 14:00:13.670573  612215 client.go:176] duration metric: took 6.843800736s to LocalClient.Create
	I1124 14:00:13.670587  612215 start.go:167] duration metric: took 6.843861689s to libmachine.API.Create "auto-165759"
	I1124 14:00:13.670601  612215 start.go:293] postStartSetup for "auto-165759" (driver="docker")
	I1124 14:00:13.670616  612215 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 14:00:13.670685  612215 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 14:00:13.670738  612215 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-165759
	I1124 14:00:13.698138  612215 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/auto-165759/id_rsa Username:docker}
	I1124 14:00:13.820996  612215 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 14:00:13.825414  612215 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 14:00:13.825444  612215 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 14:00:13.825456  612215 filesync.go:126] Scanning /home/jenkins/minikube-integration/21932-348000/.minikube/addons for local assets ...
	I1124 14:00:13.825505  612215 filesync.go:126] Scanning /home/jenkins/minikube-integration/21932-348000/.minikube/files for local assets ...
	I1124 14:00:13.825601  612215 filesync.go:149] local asset: /home/jenkins/minikube-integration/21932-348000/.minikube/files/etc/ssl/certs/3515932.pem -> 3515932.pem in /etc/ssl/certs
	I1124 14:00:13.825709  612215 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 14:00:13.835417  612215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/files/etc/ssl/certs/3515932.pem --> /etc/ssl/certs/3515932.pem (1708 bytes)
	I1124 14:00:13.863041  612215 start.go:296] duration metric: took 192.424811ms for postStartSetup
	I1124 14:00:13.863433  612215 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-165759
	I1124 14:00:13.888619  612215 profile.go:143] Saving config to /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/auto-165759/config.json ...
	I1124 14:00:13.888908  612215 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 14:00:13.888966  612215 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-165759
	I1124 14:00:13.920902  612215 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/auto-165759/id_rsa Username:docker}
	I1124 14:00:14.050594  612215 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 14:00:14.063258  612215 start.go:128] duration metric: took 7.238541779s to createHost
	I1124 14:00:14.064625  612215 start.go:83] releasing machines lock for "auto-165759", held for 7.24000636s
	I1124 14:00:14.064709  612215 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-165759
	I1124 14:00:14.092621  612215 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 14:00:14.092724  612215 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-165759
	I1124 14:00:14.093184  612215 ssh_runner.go:195] Run: cat /version.json
	I1124 14:00:14.093367  612215 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-165759
	I1124 14:00:14.122464  612215 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/auto-165759/id_rsa Username:docker}
	I1124 14:00:14.124086  612215 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/auto-165759/id_rsa Username:docker}
	I1124 14:00:14.251712  612215 ssh_runner.go:195] Run: systemctl --version
	I1124 14:00:14.338825  612215 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1124 14:00:14.391538  612215 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 14:00:14.397502  612215 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 14:00:14.397562  612215 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 14:00:14.434564  612215 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1124 14:00:14.434583  612215 start.go:496] detecting cgroup driver to use...
	I1124 14:00:14.434677  612215 detect.go:190] detected "systemd" cgroup driver on host os
	I1124 14:00:14.434718  612215 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1124 14:00:14.455600  612215 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1124 14:00:14.471560  612215 docker.go:218] disabling cri-docker service (if available) ...
	I1124 14:00:14.471606  612215 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 14:00:14.492846  612215 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 14:00:14.516666  612215 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 14:00:14.628047  612215 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 14:00:14.762541  612215 docker.go:234] disabling docker service ...
	I1124 14:00:14.762611  612215 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 14:00:14.788500  612215 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 14:00:14.804916  612215 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 14:00:14.914457  612215 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 14:00:15.017311  612215 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 14:00:15.033054  612215 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 14:00:15.050765  612215 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1124 14:00:15.050831  612215 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:00:15.067688  612215 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1124 14:00:15.067758  612215 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:00:15.078836  612215 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:00:15.088784  612215 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:00:15.098815  612215 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 14:00:15.107291  612215 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:00:15.115984  612215 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:00:15.130612  612215 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:00:15.141075  612215 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 14:00:15.148885  612215 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 14:00:15.158081  612215 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 14:00:15.284432  612215 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1124 14:00:15.458151  612215 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1124 14:00:15.458218  612215 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1124 14:00:15.462710  612215 start.go:564] Will wait 60s for crictl version
	I1124 14:00:15.462768  612215 ssh_runner.go:195] Run: which crictl
	I1124 14:00:15.466603  612215 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 14:00:15.514239  612215 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1124 14:00:15.514336  612215 ssh_runner.go:195] Run: crio --version
	I1124 14:00:15.571990  612215 ssh_runner.go:195] Run: crio --version
	I1124 14:00:15.624374  612215 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1124 14:00:16.229256  611344 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.125432983s)
	I1124 14:00:16.229331  611344 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.10825543s)
	I1124 14:00:16.229372  611344 api_server.go:72] duration metric: took 2.347389168s to wait for apiserver process to appear ...
	I1124 14:00:16.229383  611344 api_server.go:88] waiting for apiserver healthz status ...
	I1124 14:00:16.229406  611344 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1124 14:00:16.229418  611344 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.067014819s)
	I1124 14:00:16.229561  611344 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.899992686s)
	I1124 14:00:16.231089  611344 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-305966 addons enable metrics-server
	
	I1124 14:00:16.234148  611344 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1124 14:00:16.234173  611344 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1124 14:00:16.242061  611344 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1124 14:00:15.625433  612215 cli_runner.go:164] Run: docker network inspect auto-165759 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 14:00:15.649305  612215 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1124 14:00:15.658707  612215 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 14:00:15.671300  612215 kubeadm.go:884] updating cluster {Name:auto-165759 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-165759 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 14:00:15.671506  612215 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 14:00:15.671650  612215 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 14:00:15.711238  612215 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 14:00:15.711262  612215 crio.go:433] Images already preloaded, skipping extraction
	I1124 14:00:15.711316  612215 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 14:00:15.746472  612215 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 14:00:15.746504  612215 cache_images.go:86] Images are preloaded, skipping loading
	I1124 14:00:15.746517  612215 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1124 14:00:15.746625  612215 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=auto-165759 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:auto-165759 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1124 14:00:15.746716  612215 ssh_runner.go:195] Run: crio config
	I1124 14:00:15.807927  612215 cni.go:84] Creating CNI manager for ""
	I1124 14:00:15.807956  612215 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 14:00:15.807976  612215 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 14:00:15.808011  612215 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-165759 NodeName:auto-165759 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 14:00:15.808179  612215 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-165759"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1124 14:00:15.808253  612215 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1124 14:00:15.818192  612215 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 14:00:15.818241  612215 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 14:00:15.830005  612215 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
	I1124 14:00:15.846104  612215 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 14:00:15.868243  612215 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2207 bytes)
	I1124 14:00:15.884720  612215 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1124 14:00:15.889876  612215 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 14:00:15.901296  612215 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 14:00:16.018752  612215 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 14:00:16.043555  612215 certs.go:69] Setting up /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/auto-165759 for IP: 192.168.76.2
	I1124 14:00:16.043582  612215 certs.go:195] generating shared ca certs ...
	I1124 14:00:16.043601  612215 certs.go:227] acquiring lock for ca certs: {Name:mk929c5478505d0d4647158f3ccc02830de7b582 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:00:16.043768  612215 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21932-348000/.minikube/ca.key
	I1124 14:00:16.043833  612215 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21932-348000/.minikube/proxy-client-ca.key
	I1124 14:00:16.043844  612215 certs.go:257] generating profile certs ...
	I1124 14:00:16.043975  612215 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/auto-165759/client.key
	I1124 14:00:16.043991  612215 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/auto-165759/client.crt with IP's: []
	I1124 14:00:16.293620  612215 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/auto-165759/client.crt ...
	I1124 14:00:16.293642  612215 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/auto-165759/client.crt: {Name:mkfb66e3072b561f19728f93c659ef2477f49e7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:00:16.293788  612215 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/auto-165759/client.key ...
	I1124 14:00:16.293799  612215 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/auto-165759/client.key: {Name:mk8c2a3b70815de053602eb3df8d6e8f095c6713 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:00:16.293876  612215 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/auto-165759/apiserver.key.3fd27ea3
	I1124 14:00:16.293899  612215 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/auto-165759/apiserver.crt.3fd27ea3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1124 14:00:16.367561  612215 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/auto-165759/apiserver.crt.3fd27ea3 ...
	I1124 14:00:16.367580  612215 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/auto-165759/apiserver.crt.3fd27ea3: {Name:mk24f9dfe444fcae68d949bf267d3a252d85caa8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:00:16.367694  612215 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/auto-165759/apiserver.key.3fd27ea3 ...
	I1124 14:00:16.367708  612215 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/auto-165759/apiserver.key.3fd27ea3: {Name:mkc3bc46d4e34d08ca14cc3de17f7084014b9532 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:00:16.367781  612215 certs.go:382] copying /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/auto-165759/apiserver.crt.3fd27ea3 -> /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/auto-165759/apiserver.crt
	I1124 14:00:16.367851  612215 certs.go:386] copying /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/auto-165759/apiserver.key.3fd27ea3 -> /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/auto-165759/apiserver.key
	I1124 14:00:16.367919  612215 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/auto-165759/proxy-client.key
	I1124 14:00:16.367934  612215 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/auto-165759/proxy-client.crt with IP's: []
	I1124 14:00:16.403104  612215 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/auto-165759/proxy-client.crt ...
	I1124 14:00:16.403121  612215 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/auto-165759/proxy-client.crt: {Name:mkb8228759e3aa4c9c79f27c19473d40f82f511b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:00:16.403222  612215 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/auto-165759/proxy-client.key ...
	I1124 14:00:16.403232  612215 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/auto-165759/proxy-client.key: {Name:mkecd521882f11803819669731e8cd97a748e878 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:00:16.403397  612215 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-348000/.minikube/certs/351593.pem (1338 bytes)
	W1124 14:00:16.403431  612215 certs.go:480] ignoring /home/jenkins/minikube-integration/21932-348000/.minikube/certs/351593_empty.pem, impossibly tiny 0 bytes
	I1124 14:00:16.403441  612215 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-348000/.minikube/certs/ca-key.pem (1675 bytes)
	I1124 14:00:16.403464  612215 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-348000/.minikube/certs/ca.pem (1078 bytes)
	I1124 14:00:16.403488  612215 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-348000/.minikube/certs/cert.pem (1123 bytes)
	I1124 14:00:16.403512  612215 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-348000/.minikube/certs/key.pem (1675 bytes)
	I1124 14:00:16.403550  612215 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-348000/.minikube/files/etc/ssl/certs/3515932.pem (1708 bytes)
	I1124 14:00:16.404121  612215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 14:00:16.422094  612215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 14:00:16.441628  612215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 14:00:16.460620  612215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1124 14:00:16.479189  612215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/auto-165759/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1124 14:00:16.497037  612215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/auto-165759/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1124 14:00:16.517251  612215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/auto-165759/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 14:00:16.537668  612215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/auto-165759/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1124 14:00:16.556912  612215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 14:00:16.577846  612215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/certs/351593.pem --> /usr/share/ca-certificates/351593.pem (1338 bytes)
	I1124 14:00:16.597943  612215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/files/etc/ssl/certs/3515932.pem --> /usr/share/ca-certificates/3515932.pem (1708 bytes)
	I1124 14:00:16.616957  612215 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 14:00:16.630882  612215 ssh_runner.go:195] Run: openssl version
	I1124 14:00:16.637881  612215 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/351593.pem && ln -fs /usr/share/ca-certificates/351593.pem /etc/ssl/certs/351593.pem"
	I1124 14:00:16.243058  611344 addons.go:530] duration metric: took 2.361038584s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1124 14:00:16.730169  611344 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1124 14:00:16.734323  611344 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1124 14:00:16.735266  611344 api_server.go:141] control plane version: v1.34.1
	I1124 14:00:16.735292  611344 api_server.go:131] duration metric: took 505.902315ms to wait for apiserver health ...
	I1124 14:00:16.735304  611344 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 14:00:16.738854  611344 system_pods.go:59] 8 kube-system pods found
	I1124 14:00:16.738899  611344 system_pods.go:61] "coredns-66bc5c9577-z4d5k" [a925cbe1-f3d5-4821-a1bf-afc3d3ed1062] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1124 14:00:16.738911  611344 system_pods.go:61] "etcd-newest-cni-305966" [f603e9b8-89c7-4735-97bb-82e67ab5fccd] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 14:00:16.738920  611344 system_pods.go:61] "kindnet-7c2kd" [353470b5-271a-4976-9823-aae696867ae3] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1124 14:00:16.738933  611344 system_pods.go:61] "kube-apiserver-newest-cni-305966" [4bbaeb61-1730-4352-815e-afc398299d99] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 14:00:16.738948  611344 system_pods.go:61] "kube-controller-manager-newest-cni-305966" [caf78e4b-40b4-467b-ade5-44a85043db3c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 14:00:16.738960  611344 system_pods.go:61] "kube-proxy-bwchr" [d1715fb6-8be2-493f-81c7-9e606cca9736] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1124 14:00:16.738971  611344 system_pods.go:61] "kube-scheduler-newest-cni-305966" [60d22afb-3af1-42aa-bce3-f4bc578e68ff] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 14:00:16.738982  611344 system_pods.go:61] "storage-provisioner" [408ded79-aabb-4020-867d-a7c3da485d56] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1124 14:00:16.738990  611344 system_pods.go:74] duration metric: took 3.678015ms to wait for pod list to return data ...
	I1124 14:00:16.739001  611344 default_sa.go:34] waiting for default service account to be created ...
	I1124 14:00:16.741118  611344 default_sa.go:45] found service account: "default"
	I1124 14:00:16.741137  611344 default_sa.go:55] duration metric: took 2.126918ms for default service account to be created ...
	I1124 14:00:16.741149  611344 kubeadm.go:587] duration metric: took 2.859166676s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1124 14:00:16.741170  611344 node_conditions.go:102] verifying NodePressure condition ...
	I1124 14:00:16.743473  611344 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1124 14:00:16.743502  611344 node_conditions.go:123] node cpu capacity is 8
	I1124 14:00:16.743518  611344 node_conditions.go:105] duration metric: took 2.342524ms to run NodePressure ...
	I1124 14:00:16.743532  611344 start.go:242] waiting for startup goroutines ...
	I1124 14:00:16.743547  611344 start.go:247] waiting for cluster config update ...
	I1124 14:00:16.743562  611344 start.go:256] writing updated cluster config ...
	I1124 14:00:16.743866  611344 ssh_runner.go:195] Run: rm -f paused
	I1124 14:00:16.793638  611344 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1124 14:00:16.795428  611344 out.go:179] * Done! kubectl is now configured to use "newest-cni-305966" cluster and "default" namespace by default
	I1124 14:00:16.647204  612215 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/351593.pem
	I1124 14:00:16.651320  612215 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 13:19 /usr/share/ca-certificates/351593.pem
	I1124 14:00:16.651367  612215 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/351593.pem
	I1124 14:00:16.698089  612215 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/351593.pem /etc/ssl/certs/51391683.0"
	I1124 14:00:16.708151  612215 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3515932.pem && ln -fs /usr/share/ca-certificates/3515932.pem /etc/ssl/certs/3515932.pem"
	I1124 14:00:16.717574  612215 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3515932.pem
	I1124 14:00:16.721678  612215 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 13:19 /usr/share/ca-certificates/3515932.pem
	I1124 14:00:16.721743  612215 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3515932.pem
	I1124 14:00:16.766691  612215 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3515932.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 14:00:16.775921  612215 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 14:00:16.784481  612215 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 14:00:16.788372  612215 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 13:14 /usr/share/ca-certificates/minikubeCA.pem
	I1124 14:00:16.788422  612215 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 14:00:16.830541  612215 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 14:00:16.839593  612215 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 14:00:16.843695  612215 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1124 14:00:16.843739  612215 kubeadm.go:401] StartCluster: {Name:auto-165759 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-165759 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[]
APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetCl
ientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 14:00:16.843808  612215 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 14:00:16.843883  612215 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 14:00:16.875196  612215 cri.go:89] found id: ""
	I1124 14:00:16.875260  612215 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 14:00:16.884482  612215 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1124 14:00:16.894253  612215 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1124 14:00:16.894311  612215 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 14:00:16.902785  612215 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1124 14:00:16.902802  612215 kubeadm.go:158] found existing configuration files:
	
	I1124 14:00:16.902843  612215 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1124 14:00:16.911622  612215 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1124 14:00:16.911684  612215 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1124 14:00:16.919913  612215 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1124 14:00:16.927966  612215 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1124 14:00:16.928022  612215 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 14:00:16.935633  612215 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1124 14:00:16.943371  612215 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1124 14:00:16.943426  612215 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 14:00:16.951152  612215 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1124 14:00:16.958295  612215 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1124 14:00:16.958340  612215 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1124 14:00:16.965184  612215 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1124 14:00:17.004327  612215 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1124 14:00:17.004411  612215 kubeadm.go:319] [preflight] Running pre-flight checks
	I1124 14:00:17.025121  612215 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1124 14:00:17.025207  612215 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1124 14:00:17.025261  612215 kubeadm.go:319] OS: Linux
	I1124 14:00:17.025361  612215 kubeadm.go:319] CGROUPS_CPU: enabled
	I1124 14:00:17.025463  612215 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1124 14:00:17.025518  612215 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1124 14:00:17.025589  612215 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1124 14:00:17.025664  612215 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1124 14:00:17.025744  612215 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1124 14:00:17.025834  612215 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1124 14:00:17.025910  612215 kubeadm.go:319] CGROUPS_IO: enabled
	I1124 14:00:17.102227  612215 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1124 14:00:17.102576  612215 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1124 14:00:17.103416  612215 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1124 14:00:17.112118  612215 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	
	
	==> CRI-O <==
	Nov 24 14:00:16 newest-cni-305966 crio[521]: time="2025-11-24T14:00:16.425158007Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 14:00:16 newest-cni-305966 crio[521]: time="2025-11-24T14:00:16.427588856Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=ec1b4782-db86-474f-900f-7256d40e75f3 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 24 14:00:16 newest-cni-305966 crio[521]: time="2025-11-24T14:00:16.428211088Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=066de6ea-c5be-4bcd-8403-ff846e5ea720 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 24 14:00:16 newest-cni-305966 crio[521]: time="2025-11-24T14:00:16.429141198Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 24 14:00:16 newest-cni-305966 crio[521]: time="2025-11-24T14:00:16.429696936Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 24 14:00:16 newest-cni-305966 crio[521]: time="2025-11-24T14:00:16.430046588Z" level=info msg="Ran pod sandbox 0b0017a7c8597dcb404ec14d596076c8a3ae3ce6bdaa5dcdd4f8e09285b18b4f with infra container: kube-system/kube-proxy-bwchr/POD" id=ec1b4782-db86-474f-900f-7256d40e75f3 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 24 14:00:16 newest-cni-305966 crio[521]: time="2025-11-24T14:00:16.430511638Z" level=info msg="Ran pod sandbox 7f71b60c0073d34b0ba987199ee21b8e15ce8ca0e28afe0070943950ab744991 with infra container: kube-system/kindnet-7c2kd/POD" id=066de6ea-c5be-4bcd-8403-ff846e5ea720 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 24 14:00:16 newest-cni-305966 crio[521]: time="2025-11-24T14:00:16.431090177Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=dcf07394-3260-43b2-8c08-0f4d0436de84 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 14:00:16 newest-cni-305966 crio[521]: time="2025-11-24T14:00:16.431356077Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=173572c7-5f36-418c-9ac9-a6da545065ee name=/runtime.v1.ImageService/ImageStatus
	Nov 24 14:00:16 newest-cni-305966 crio[521]: time="2025-11-24T14:00:16.43200066Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=65a87200-6ce5-48cf-8ab2-ec488485082a name=/runtime.v1.ImageService/ImageStatus
	Nov 24 14:00:16 newest-cni-305966 crio[521]: time="2025-11-24T14:00:16.432259244Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=d0b517c5-4129-4671-b3a6-a4fe6e17aa34 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 14:00:16 newest-cni-305966 crio[521]: time="2025-11-24T14:00:16.432947141Z" level=info msg="Creating container: kube-system/kube-proxy-bwchr/kube-proxy" id=b93727d9-d0a5-4bbd-ae24-ab881603c1a8 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 14:00:16 newest-cni-305966 crio[521]: time="2025-11-24T14:00:16.433056247Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 14:00:16 newest-cni-305966 crio[521]: time="2025-11-24T14:00:16.433347205Z" level=info msg="Creating container: kube-system/kindnet-7c2kd/kindnet-cni" id=ab5cbbed-0c2c-4d62-8890-91a4ddfaacac name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 14:00:16 newest-cni-305966 crio[521]: time="2025-11-24T14:00:16.433432874Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 14:00:16 newest-cni-305966 crio[521]: time="2025-11-24T14:00:16.438503169Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 14:00:16 newest-cni-305966 crio[521]: time="2025-11-24T14:00:16.438924898Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 14:00:16 newest-cni-305966 crio[521]: time="2025-11-24T14:00:16.439025875Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 14:00:16 newest-cni-305966 crio[521]: time="2025-11-24T14:00:16.439385887Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 14:00:16 newest-cni-305966 crio[521]: time="2025-11-24T14:00:16.465472433Z" level=info msg="Created container 93ca1d74735d61de68ecdb39646c7d3ef72eef0bd011bb61a209f4409d96a341: kube-system/kindnet-7c2kd/kindnet-cni" id=ab5cbbed-0c2c-4d62-8890-91a4ddfaacac name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 14:00:16 newest-cni-305966 crio[521]: time="2025-11-24T14:00:16.466057702Z" level=info msg="Starting container: 93ca1d74735d61de68ecdb39646c7d3ef72eef0bd011bb61a209f4409d96a341" id=7479948e-450e-448f-bb37-d5958a27f492 name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 14:00:16 newest-cni-305966 crio[521]: time="2025-11-24T14:00:16.46780103Z" level=info msg="Started container" PID=1050 containerID=93ca1d74735d61de68ecdb39646c7d3ef72eef0bd011bb61a209f4409d96a341 description=kube-system/kindnet-7c2kd/kindnet-cni id=7479948e-450e-448f-bb37-d5958a27f492 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7f71b60c0073d34b0ba987199ee21b8e15ce8ca0e28afe0070943950ab744991
	Nov 24 14:00:16 newest-cni-305966 crio[521]: time="2025-11-24T14:00:16.468749822Z" level=info msg="Created container 960f84874b52e42fe78f3a9cba42e0e7801828964c0d30c9a28aa87a5a060904: kube-system/kube-proxy-bwchr/kube-proxy" id=b93727d9-d0a5-4bbd-ae24-ab881603c1a8 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 14:00:16 newest-cni-305966 crio[521]: time="2025-11-24T14:00:16.469268218Z" level=info msg="Starting container: 960f84874b52e42fe78f3a9cba42e0e7801828964c0d30c9a28aa87a5a060904" id=fa37bb19-9405-4a35-86aa-147ddc2efbd5 name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 14:00:16 newest-cni-305966 crio[521]: time="2025-11-24T14:00:16.471804221Z" level=info msg="Started container" PID=1051 containerID=960f84874b52e42fe78f3a9cba42e0e7801828964c0d30c9a28aa87a5a060904 description=kube-system/kube-proxy-bwchr/kube-proxy id=fa37bb19-9405-4a35-86aa-147ddc2efbd5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=0b0017a7c8597dcb404ec14d596076c8a3ae3ce6bdaa5dcdd4f8e09285b18b4f
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	93ca1d74735d6       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   3 seconds ago       Running             kindnet-cni               1                   7f71b60c0073d       kindnet-7c2kd                               kube-system
	960f84874b52e       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   3 seconds ago       Running             kube-proxy                1                   0b0017a7c8597       kube-proxy-bwchr                            kube-system
	d89776e70ad5c       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   6 seconds ago       Running             kube-apiserver            1                   1cc2842573d61       kube-apiserver-newest-cni-305966            kube-system
	e119d99662f18       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   6 seconds ago       Running             etcd                      1                   f8e2e5354e4ea       etcd-newest-cni-305966                      kube-system
	dfa15d58172bf       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   6 seconds ago       Running             kube-scheduler            1                   e1ff60f922a27       kube-scheduler-newest-cni-305966            kube-system
	4ad843941afeb       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   6 seconds ago       Running             kube-controller-manager   1                   ced98c89db1d8       kube-controller-manager-newest-cni-305966   kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-305966
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-305966
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b5d1c9f4e75f4e638a533695fd62619949cefcab
	                    minikube.k8s.io/name=newest-cni-305966
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T13_59_54_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 13:59:51 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-305966
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 14:00:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 14:00:15 +0000   Mon, 24 Nov 2025 13:59:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 14:00:15 +0000   Mon, 24 Nov 2025 13:59:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 14:00:15 +0000   Mon, 24 Nov 2025 13:59:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Mon, 24 Nov 2025 14:00:15 +0000   Mon, 24 Nov 2025 13:59:49 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    newest-cni-305966
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                ecbd5efe-848f-483d-9396-2b651bf1384a
	  Boot ID:                    9a34d64a-eb17-4892-9c0b-855837aec864
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-305966                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         27s
	  kube-system                 kindnet-7c2kd                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      21s
	  kube-system                 kube-apiserver-newest-cni-305966             250m (3%)     0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-controller-manager-newest-cni-305966    200m (2%)     0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-proxy-bwchr                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         21s
	  kube-system                 kube-scheduler-newest-cni-305966             100m (1%)     0 (0%)      0 (0%)           0 (0%)         27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 20s   kube-proxy       
	  Normal  Starting                 3s    kube-proxy       
	  Normal  Starting                 27s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  26s   kubelet          Node newest-cni-305966 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    26s   kubelet          Node newest-cni-305966 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     26s   kubelet          Node newest-cni-305966 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           22s   node-controller  Node newest-cni-305966 event: Registered Node newest-cni-305966 in Controller
	  Normal  RegisteredNode           2s    node-controller  Node newest-cni-305966 event: Registered Node newest-cni-305966 in Controller
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a c8 62 0b 56 43 08 06
	[  +0.000389] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 7e 1c 4f d3 0f 6c 08 06
	[Nov24 13:16] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +1.054353] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +1.023912] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +1.023872] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +1.023897] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +1.023891] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +2.047768] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +4.031637] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +8.191144] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[ +16.382308] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[Nov24 13:17] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	
	
	==> etcd [e119d99662f18807314c108742a912814e08072ba8c225bbfef5c4cb0089eaf5] <==
	{"level":"warn","ts":"2025-11-24T14:00:14.752095Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38932","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:00:14.761148Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38934","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:00:14.772866Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38956","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:00:14.789557Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38978","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:00:14.799241Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39002","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:00:14.806459Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39024","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:00:14.812840Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39042","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:00:14.820060Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39074","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:00:14.827294Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39080","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:00:14.835622Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39096","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:00:14.847934Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39104","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:00:14.858657Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:00:14.866798Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39142","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:00:14.873772Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39160","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:00:14.880425Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39178","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:00:14.887245Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39202","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:00:14.894148Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39214","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:00:14.900198Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39234","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:00:14.908176Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39258","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:00:14.915445Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39264","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:00:14.921976Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39280","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:00:14.936616Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39290","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:00:14.949029Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39312","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:00:14.969806Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39326","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:00:15.014971Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39342","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 14:00:20 up  2:42,  0 user,  load average: 3.01, 2.96, 2.06
	Linux newest-cni-305966 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [93ca1d74735d61de68ecdb39646c7d3ef72eef0bd011bb61a209f4409d96a341] <==
	I1124 14:00:16.699736       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 14:00:16.766644       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1124 14:00:16.766823       1 main.go:148] setting mtu 1500 for CNI 
	I1124 14:00:16.766852       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 14:00:16.766880       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T14:00:16Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 14:00:16.899747       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 14:00:16.899797       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 14:00:16.899810       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 14:00:16.998773       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1124 14:00:17.300005       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 14:00:17.300103       1 metrics.go:72] Registering metrics
	I1124 14:00:17.300196       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [d89776e70ad5c30b82fb56e369bc8ca9c79f468ce35aa2d87e210fc8e246b7bc] <==
	I1124 14:00:15.603525       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1124 14:00:15.603597       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1124 14:00:15.603971       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	E1124 14:00:15.610718       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1124 14:00:15.611692       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1124 14:00:15.611772       1 aggregator.go:171] initial CRD sync complete...
	I1124 14:00:15.611797       1 autoregister_controller.go:144] Starting autoregister controller
	I1124 14:00:15.611805       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1124 14:00:15.611812       1 cache.go:39] Caches are synced for autoregister controller
	I1124 14:00:15.619322       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1124 14:00:15.638258       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 14:00:15.640260       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1124 14:00:15.968793       1 controller.go:667] quota admission added evaluator for: namespaces
	I1124 14:00:15.996073       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1124 14:00:16.019245       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 14:00:16.034402       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 14:00:16.050210       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1124 14:00:16.092273       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.97.32.19"}
	I1124 14:00:16.102678       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.102.153.146"}
	I1124 14:00:16.501809       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1124 14:00:18.926832       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1124 14:00:18.926869       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1124 14:00:19.328225       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1124 14:00:19.427147       1 controller.go:667] quota admission added evaluator for: endpoints
	I1124 14:00:19.527224       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [4ad843941afebe46506b73f43e719ede87774017b0d3ad3b20355a54904afbf7] <==
	I1124 14:00:18.888916       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1124 14:00:18.890567       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1124 14:00:18.900850       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1124 14:00:18.900924       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1124 14:00:18.900962       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1124 14:00:18.900976       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1124 14:00:18.900984       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1124 14:00:18.902205       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1124 14:00:18.913472       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1124 14:00:18.922926       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1124 14:00:18.922952       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1124 14:00:18.922962       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1124 14:00:18.923055       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1124 14:00:18.924155       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1124 14:00:18.924239       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1124 14:00:18.924280       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1124 14:00:18.926494       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1124 14:00:18.929107       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 14:00:18.929955       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1124 14:00:18.932558       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1124 14:00:18.938850       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 14:00:18.938864       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1124 14:00:18.938872       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1124 14:00:18.940964       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1124 14:00:18.948059       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [960f84874b52e42fe78f3a9cba42e0e7801828964c0d30c9a28aa87a5a060904] <==
	I1124 14:00:16.507502       1 server_linux.go:53] "Using iptables proxy"
	I1124 14:00:16.568112       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1124 14:00:16.668871       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1124 14:00:16.668927       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1124 14:00:16.669054       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 14:00:16.689910       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 14:00:16.689976       1 server_linux.go:132] "Using iptables Proxier"
	I1124 14:00:16.694935       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 14:00:16.695356       1 server.go:527] "Version info" version="v1.34.1"
	I1124 14:00:16.695384       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 14:00:16.697358       1 config.go:106] "Starting endpoint slice config controller"
	I1124 14:00:16.697371       1 config.go:200] "Starting service config controller"
	I1124 14:00:16.697396       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 14:00:16.697409       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 14:00:16.697416       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 14:00:16.697398       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 14:00:16.697624       1 config.go:309] "Starting node config controller"
	I1124 14:00:16.697638       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 14:00:16.697646       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1124 14:00:16.797577       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1124 14:00:16.797605       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1124 14:00:16.797586       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [dfa15d58172bfe1fb0c35b29157a03f11ec4135cd785419188b5de0213bc9aa5] <==
	I1124 14:00:14.903162       1 serving.go:386] Generated self-signed cert in-memory
	I1124 14:00:15.908652       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1124 14:00:15.908702       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 14:00:15.913863       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1124 14:00:15.913866       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 14:00:15.913926       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 14:00:15.913940       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1124 14:00:15.913963       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1124 14:00:15.913925       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1124 14:00:15.914354       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1124 14:00:15.914388       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1124 14:00:16.015162       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1124 14:00:16.015295       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 14:00:16.015356       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Nov 24 14:00:15 newest-cni-305966 kubelet[677]: I1124 14:00:15.620212     677 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-305966"
	Nov 24 14:00:15 newest-cni-305966 kubelet[677]: E1124 14:00:15.632024     677 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-305966\" already exists" pod="kube-system/kube-scheduler-newest-cni-305966"
	Nov 24 14:00:15 newest-cni-305966 kubelet[677]: I1124 14:00:15.632075     677 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-305966"
	Nov 24 14:00:15 newest-cni-305966 kubelet[677]: E1124 14:00:15.638637     677 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-305966\" already exists" pod="kube-system/etcd-newest-cni-305966"
	Nov 24 14:00:15 newest-cni-305966 kubelet[677]: I1124 14:00:15.638666     677 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-305966"
	Nov 24 14:00:15 newest-cni-305966 kubelet[677]: I1124 14:00:15.642547     677 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-305966"
	Nov 24 14:00:15 newest-cni-305966 kubelet[677]: I1124 14:00:15.642911     677 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-305966"
	Nov 24 14:00:15 newest-cni-305966 kubelet[677]: I1124 14:00:15.643163     677 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Nov 24 14:00:15 newest-cni-305966 kubelet[677]: I1124 14:00:15.644306     677 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Nov 24 14:00:15 newest-cni-305966 kubelet[677]: E1124 14:00:15.645942     677 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-305966\" already exists" pod="kube-system/kube-apiserver-newest-cni-305966"
	Nov 24 14:00:15 newest-cni-305966 kubelet[677]: I1124 14:00:15.645970     677 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-305966"
	Nov 24 14:00:15 newest-cni-305966 kubelet[677]: E1124 14:00:15.661852     677 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-305966\" already exists" pod="kube-system/kube-controller-manager-newest-cni-305966"
	Nov 24 14:00:15 newest-cni-305966 kubelet[677]: I1124 14:00:15.972751     677 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-305966"
	Nov 24 14:00:15 newest-cni-305966 kubelet[677]: E1124 14:00:15.981170     677 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-305966\" already exists" pod="kube-system/kube-controller-manager-newest-cni-305966"
	Nov 24 14:00:16 newest-cni-305966 kubelet[677]: I1124 14:00:16.117305     677 apiserver.go:52] "Watching apiserver"
	Nov 24 14:00:16 newest-cni-305966 kubelet[677]: I1124 14:00:16.121366     677 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 24 14:00:16 newest-cni-305966 kubelet[677]: I1124 14:00:16.144956     677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/353470b5-271a-4976-9823-aae696867ae3-lib-modules\") pod \"kindnet-7c2kd\" (UID: \"353470b5-271a-4976-9823-aae696867ae3\") " pod="kube-system/kindnet-7c2kd"
	Nov 24 14:00:16 newest-cni-305966 kubelet[677]: I1124 14:00:16.145021     677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d1715fb6-8be2-493f-81c7-9e606cca9736-xtables-lock\") pod \"kube-proxy-bwchr\" (UID: \"d1715fb6-8be2-493f-81c7-9e606cca9736\") " pod="kube-system/kube-proxy-bwchr"
	Nov 24 14:00:16 newest-cni-305966 kubelet[677]: I1124 14:00:16.145049     677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/353470b5-271a-4976-9823-aae696867ae3-cni-cfg\") pod \"kindnet-7c2kd\" (UID: \"353470b5-271a-4976-9823-aae696867ae3\") " pod="kube-system/kindnet-7c2kd"
	Nov 24 14:00:16 newest-cni-305966 kubelet[677]: I1124 14:00:16.145112     677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/353470b5-271a-4976-9823-aae696867ae3-xtables-lock\") pod \"kindnet-7c2kd\" (UID: \"353470b5-271a-4976-9823-aae696867ae3\") " pod="kube-system/kindnet-7c2kd"
	Nov 24 14:00:16 newest-cni-305966 kubelet[677]: I1124 14:00:16.145173     677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d1715fb6-8be2-493f-81c7-9e606cca9736-lib-modules\") pod \"kube-proxy-bwchr\" (UID: \"d1715fb6-8be2-493f-81c7-9e606cca9736\") " pod="kube-system/kube-proxy-bwchr"
	Nov 24 14:00:17 newest-cni-305966 kubelet[677]: I1124 14:00:17.812419     677 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Nov 24 14:00:17 newest-cni-305966 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 24 14:00:17 newest-cni-305966 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 24 14:00:17 newest-cni-305966 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
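
Note: the certificate plumbing recorded in the startup portion of the log above follows one pattern throughout: hash each CA file with openssl and symlink it into /etc/ssl/certs under that hash so the system trust store can resolve it. A minimal sketch of that pattern, using the same paths and hash value that appear in the log (illustrative only, not minikube's actual code path):

	# compute the OpenSSL subject hash for a CA file and expose it as <hash>.0,
	# which is how the ln -fs commands in the log above wire up /etc/ssl/certs
	CERT=/usr/share/ca-certificates/minikubeCA.pem
	HASH=$(openssl x509 -hash -noout -in "$CERT")   # prints e.g. b5213941, as seen above
	sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"
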
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-305966 -n newest-cni-305966
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-305966 -n newest-cni-305966: exit status 2 (346.737005ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
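
The --format flag above takes a Go template over minikube's status output, so a single field such as .APIServer can be read directly. A small sketch of scripting around it, tolerating the non-zero exit the same way the helper does (assumption: a non-zero code here only signals that not every component reports Running):

	# print only the apiserver state; "|| true" mirrors the helper's "may be ok" handling
	state=$(out/minikube-linux-amd64 status --format='{{.APIServer}}' -p newest-cni-305966 || true)
	echo "apiserver: ${state:-unknown}"
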
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-305966 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-z4d5k storage-provisioner dashboard-metrics-scraper-6ffb444bf9-7m9dl kubernetes-dashboard-855c9754f9-5l2ch
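
The non-running-pod list above comes from filtering on pod phase; the equivalent standalone query, with the same context and selector as the command the helper ran:

	# names of pods in any namespace whose phase is not Running
	kubectl --context newest-cni-305966 get pods -A \
	  --field-selector='status.phase!=Running' \
	  -o jsonpath='{.items[*].metadata.name}'
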
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-305966 describe pod coredns-66bc5c9577-z4d5k storage-provisioner dashboard-metrics-scraper-6ffb444bf9-7m9dl kubernetes-dashboard-855c9754f9-5l2ch
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-305966 describe pod coredns-66bc5c9577-z4d5k storage-provisioner dashboard-metrics-scraper-6ffb444bf9-7m9dl kubernetes-dashboard-855c9754f9-5l2ch: exit status 1 (70.629492ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-z4d5k" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-7m9dl" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-5l2ch" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-305966 describe pod coredns-66bc5c9577-z4d5k storage-provisioner dashboard-metrics-scraper-6ffb444bf9-7m9dl kubernetes-dashboard-855c9754f9-5l2ch: exit status 1
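
The NotFound errors above are most plausibly a namespace mismatch rather than the pods having vanished: kubectl describe without -n looks only in the default namespace, while the pods named here live in kube-system and kubernetes-dashboard. A hedged sketch of the namespaced equivalent:

	# same describe, but scoped to the namespaces these pods actually run in
	kubectl --context newest-cni-305966 -n kube-system describe pod \
	  coredns-66bc5c9577-z4d5k storage-provisioner
	kubectl --context newest-cni-305966 -n kubernetes-dashboard describe pod \
	  dashboard-metrics-scraper-6ffb444bf9-7m9dl kubernetes-dashboard-855c9754f9-5l2ch
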
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
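
The proxy snapshot above is a plain environment dump; an equivalent one-liner for reproducing it locally (illustrative):

	# show any proxy-related variables, or note that none are set
	env | grep -iE '^(http_proxy|https_proxy|no_proxy)=' || echo "no proxy variables set"
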
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-305966
helpers_test.go:243: (dbg) docker inspect newest-cni-305966:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "d5c8bb04c9a8b101735edadb698799e5d6c945521b86d4301b6ea670806af7c0",
	        "Created": "2025-11-24T13:59:37.467773592Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 611598,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T14:00:05.618436087Z",
	            "FinishedAt": "2025-11-24T14:00:04.74109777Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/d5c8bb04c9a8b101735edadb698799e5d6c945521b86d4301b6ea670806af7c0/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/d5c8bb04c9a8b101735edadb698799e5d6c945521b86d4301b6ea670806af7c0/hostname",
	        "HostsPath": "/var/lib/docker/containers/d5c8bb04c9a8b101735edadb698799e5d6c945521b86d4301b6ea670806af7c0/hosts",
	        "LogPath": "/var/lib/docker/containers/d5c8bb04c9a8b101735edadb698799e5d6c945521b86d4301b6ea670806af7c0/d5c8bb04c9a8b101735edadb698799e5d6c945521b86d4301b6ea670806af7c0-json.log",
	        "Name": "/newest-cni-305966",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-305966:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-305966",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "d5c8bb04c9a8b101735edadb698799e5d6c945521b86d4301b6ea670806af7c0",
	                "LowerDir": "/var/lib/docker/overlay2/1808584ab797bfabf59b5eb852f6a41c74927bfca99095e0562af0f66a3fd777-init/diff:/var/lib/docker/overlay2/b17d6205cf290186b389ac7c1255d7274fea54ef27df9ff8755bddd2d25eb638/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1808584ab797bfabf59b5eb852f6a41c74927bfca99095e0562af0f66a3fd777/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1808584ab797bfabf59b5eb852f6a41c74927bfca99095e0562af0f66a3fd777/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1808584ab797bfabf59b5eb852f6a41c74927bfca99095e0562af0f66a3fd777/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "newest-cni-305966",
	                "Source": "/var/lib/docker/volumes/newest-cni-305966/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-305966",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-305966",
	                "name.minikube.sigs.k8s.io": "newest-cni-305966",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "c055989e1384b9ede1ec9c4428c7779f3050a2fd2f5dc52bc67b27f0534a083a",
	            "SandboxKey": "/var/run/docker/netns/c055989e1384",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33463"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33464"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33467"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33465"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33466"
	                    }
	                ]
	            },
	            "Networks": {
	                "newest-cni-305966": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "b817ca8b27f62f3a3563cdb6a0b78b72617f6f646af87e5319081625ae16c4aa",
	                    "EndpointID": "c3c54093c946c0b399f26403aab56cef2e88ee826799fe72637182f3a8d21313",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "MacAddress": "9a:31:00:cf:35:c4",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-305966",
	                        "d5c8bb04c9a8"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
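The 22/tcp entry under NetworkSettings.Ports above (HostIp 127.0.0.1, HostPort 33463) is the address the later provisioning steps dial for SSH. The same lookup the harness performs can be reproduced by hand with the Go-template query that appears further down in these logs (a sketch against this run's container name; the output is assumed to match the mapping shown above):

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' newest-cni-305966
	# should print 33463 for the container state captured in the inspect output above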
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-305966 -n newest-cni-305966
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-305966 -n newest-cni-305966: exit status 2 (373.313466ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
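The single "Running" line above is the Host field selected by the --format Go template in the status command; other fields of minikube's status struct can be pulled the same way (a sketch only, with field names assumed from minikube's status output rather than verified against this build):

	out/minikube-linux-amd64 status -p newest-cni-305966 --format='{{.Host}} {{.Kubelet}} {{.APIServer}}'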
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-305966 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p newest-cni-305966 logs -n 25: (1.026706693s)
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ image   │ no-preload-495729 image list --format=json                                                                                                                                                                                                    │ no-preload-495729            │ jenkins │ v1.37.0 │ 24 Nov 25 13:59 UTC │ 24 Nov 25 13:59 UTC │
	│ pause   │ -p no-preload-495729 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-495729            │ jenkins │ v1.37.0 │ 24 Nov 25 13:59 UTC │                     │
	│ delete  │ -p no-preload-495729                                                                                                                                                                                                                          │ no-preload-495729            │ jenkins │ v1.37.0 │ 24 Nov 25 13:59 UTC │ 24 Nov 25 13:59 UTC │
	│ delete  │ -p no-preload-495729                                                                                                                                                                                                                          │ no-preload-495729            │ jenkins │ v1.37.0 │ 24 Nov 25 13:59 UTC │ 24 Nov 25 13:59 UTC │
	│ delete  │ -p disable-driver-mounts-036543                                                                                                                                                                                                               │ disable-driver-mounts-036543 │ jenkins │ v1.37.0 │ 24 Nov 25 13:59 UTC │ 24 Nov 25 13:59 UTC │
	│ start   │ -p default-k8s-diff-port-098307 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-098307 │ jenkins │ v1.37.0 │ 24 Nov 25 13:59 UTC │ 24 Nov 25 13:59 UTC │
	│ image   │ old-k8s-version-551674 image list --format=json                                                                                                                                                                                               │ old-k8s-version-551674       │ jenkins │ v1.37.0 │ 24 Nov 25 13:59 UTC │ 24 Nov 25 13:59 UTC │
	│ pause   │ -p old-k8s-version-551674 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-551674       │ jenkins │ v1.37.0 │ 24 Nov 25 13:59 UTC │                     │
	│ delete  │ -p old-k8s-version-551674                                                                                                                                                                                                                     │ old-k8s-version-551674       │ jenkins │ v1.37.0 │ 24 Nov 25 13:59 UTC │ 24 Nov 25 13:59 UTC │
	│ delete  │ -p old-k8s-version-551674                                                                                                                                                                                                                     │ old-k8s-version-551674       │ jenkins │ v1.37.0 │ 24 Nov 25 13:59 UTC │ 24 Nov 25 13:59 UTC │
	│ start   │ -p newest-cni-305966 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-305966            │ jenkins │ v1.37.0 │ 24 Nov 25 13:59 UTC │ 24 Nov 25 14:00 UTC │
	│ start   │ -p kubernetes-upgrade-061040 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio                                                                                                                             │ kubernetes-upgrade-061040    │ jenkins │ v1.37.0 │ 24 Nov 25 13:59 UTC │                     │
	│ start   │ -p kubernetes-upgrade-061040 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                      │ kubernetes-upgrade-061040    │ jenkins │ v1.37.0 │ 24 Nov 25 13:59 UTC │ 24 Nov 25 14:00 UTC │
	│ addons  │ enable metrics-server -p newest-cni-305966 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-305966            │ jenkins │ v1.37.0 │ 24 Nov 25 14:00 UTC │                     │
	│ stop    │ -p newest-cni-305966 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-305966            │ jenkins │ v1.37.0 │ 24 Nov 25 14:00 UTC │ 24 Nov 25 14:00 UTC │
	│ delete  │ -p kubernetes-upgrade-061040                                                                                                                                                                                                                  │ kubernetes-upgrade-061040    │ jenkins │ v1.37.0 │ 24 Nov 25 14:00 UTC │ 24 Nov 25 14:00 UTC │
	│ addons  │ enable dashboard -p newest-cni-305966 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-305966            │ jenkins │ v1.37.0 │ 24 Nov 25 14:00 UTC │ 24 Nov 25 14:00 UTC │
	│ start   │ -p newest-cni-305966 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-305966            │ jenkins │ v1.37.0 │ 24 Nov 25 14:00 UTC │ 24 Nov 25 14:00 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-098307 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-098307 │ jenkins │ v1.37.0 │ 24 Nov 25 14:00 UTC │                     │
	│ start   │ -p auto-165759 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                                       │ auto-165759                  │ jenkins │ v1.37.0 │ 24 Nov 25 14:00 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-098307 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-098307 │ jenkins │ v1.37.0 │ 24 Nov 25 14:00 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-456660 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-456660           │ jenkins │ v1.37.0 │ 24 Nov 25 14:00 UTC │                     │
	│ stop    │ -p embed-certs-456660 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-456660           │ jenkins │ v1.37.0 │ 24 Nov 25 14:00 UTC │                     │
	│ image   │ newest-cni-305966 image list --format=json                                                                                                                                                                                                    │ newest-cni-305966            │ jenkins │ v1.37.0 │ 24 Nov 25 14:00 UTC │ 24 Nov 25 14:00 UTC │
	│ pause   │ -p newest-cni-305966 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-305966            │ jenkins │ v1.37.0 │ 24 Nov 25 14:00 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 14:00:06
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 14:00:06.639955  612215 out.go:360] Setting OutFile to fd 1 ...
	I1124 14:00:06.640075  612215 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 14:00:06.640086  612215 out.go:374] Setting ErrFile to fd 2...
	I1124 14:00:06.640093  612215 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 14:00:06.640294  612215 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-348000/.minikube/bin
	I1124 14:00:06.640720  612215 out.go:368] Setting JSON to false
	I1124 14:00:06.641938  612215 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":9754,"bootTime":1763983053,"procs":319,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 14:00:06.641994  612215 start.go:143] virtualization: kvm guest
	I1124 14:00:06.643737  612215 out.go:179] * [auto-165759] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1124 14:00:06.644898  612215 out.go:179]   - MINIKUBE_LOCATION=21932
	I1124 14:00:06.644920  612215 notify.go:221] Checking for updates...
	I1124 14:00:06.647963  612215 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 14:00:06.648994  612215 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21932-348000/kubeconfig
	I1124 14:00:06.650029  612215 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21932-348000/.minikube
	I1124 14:00:06.651162  612215 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1124 14:00:06.652296  612215 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 14:00:06.653986  612215 config.go:182] Loaded profile config "default-k8s-diff-port-098307": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 14:00:06.654117  612215 config.go:182] Loaded profile config "embed-certs-456660": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 14:00:06.654273  612215 config.go:182] Loaded profile config "newest-cni-305966": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 14:00:06.654410  612215 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 14:00:06.678126  612215 docker.go:124] docker version: linux-29.0.3:Docker Engine - Community
	I1124 14:00:06.678224  612215 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 14:00:06.738636  612215 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-24 14:00:06.729329331 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 14:00:06.738783  612215 docker.go:319] overlay module found
	I1124 14:00:06.740540  612215 out.go:179] * Using the docker driver based on user configuration
	I1124 14:00:06.741585  612215 start.go:309] selected driver: docker
	I1124 14:00:06.741600  612215 start.go:927] validating driver "docker" against <nil>
	I1124 14:00:06.741610  612215 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 14:00:06.742172  612215 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 14:00:06.795665  612215 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:65 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-24 14:00:06.785850803 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 14:00:06.795856  612215 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1124 14:00:06.796102  612215 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 14:00:06.797705  612215 out.go:179] * Using Docker driver with root privileges
	I1124 14:00:06.798793  612215 cni.go:84] Creating CNI manager for ""
	I1124 14:00:06.798864  612215 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 14:00:06.798878  612215 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1124 14:00:06.798993  612215 start.go:353] cluster config:
	{Name:auto-165759 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-165759 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 14:00:06.800311  612215 out.go:179] * Starting "auto-165759" primary control-plane node in "auto-165759" cluster
	I1124 14:00:06.801369  612215 cache.go:134] Beginning downloading kic base image for docker with crio
	I1124 14:00:06.802406  612215 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1124 14:00:06.803315  612215 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 14:00:06.803344  612215 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21932-348000/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1124 14:00:06.803357  612215 cache.go:65] Caching tarball of preloaded images
	I1124 14:00:06.803391  612215 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1124 14:00:06.803462  612215 preload.go:238] Found /home/jenkins/minikube-integration/21932-348000/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1124 14:00:06.803474  612215 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1124 14:00:06.803574  612215 profile.go:143] Saving config to /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/auto-165759/config.json ...
	I1124 14:00:06.803604  612215 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/auto-165759/config.json: {Name:mkafcf12b893460417f613b5956b061b507857b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:00:06.824406  612215 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1124 14:00:06.824437  612215 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1124 14:00:06.824457  612215 cache.go:240] Successfully downloaded all kic artifacts
	I1124 14:00:06.824499  612215 start.go:360] acquireMachinesLock for auto-165759: {Name:mke2972eaae0a3077df79966ba25decc1725d099 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 14:00:06.824601  612215 start.go:364] duration metric: took 79.565µs to acquireMachinesLock for "auto-165759"
	I1124 14:00:06.824623  612215 start.go:93] Provisioning new machine with config: &{Name:auto-165759 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-165759 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 14:00:06.824701  612215 start.go:125] createHost starting for "" (driver="docker")
	I1124 14:00:05.594487  611344 out.go:252] * Restarting existing docker container for "newest-cni-305966" ...
	I1124 14:00:05.594567  611344 cli_runner.go:164] Run: docker start newest-cni-305966
	I1124 14:00:06.042778  611344 cli_runner.go:164] Run: docker container inspect newest-cni-305966 --format={{.State.Status}}
	I1124 14:00:06.065571  611344 kic.go:430] container "newest-cni-305966" state is running.
	I1124 14:00:06.066040  611344 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-305966
	I1124 14:00:06.086933  611344 profile.go:143] Saving config to /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/newest-cni-305966/config.json ...
	I1124 14:00:06.087112  611344 machine.go:94] provisionDockerMachine start ...
	I1124 14:00:06.087177  611344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-305966
	I1124 14:00:06.106639  611344 main.go:143] libmachine: Using SSH client type: native
	I1124 14:00:06.106975  611344 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33463 <nil> <nil>}
	I1124 14:00:06.106997  611344 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 14:00:06.107857  611344 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:50272->127.0.0.1:33463: read: connection reset by peer
	I1124 14:00:09.250576  611344 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-305966
	
	I1124 14:00:09.250635  611344 ubuntu.go:182] provisioning hostname "newest-cni-305966"
	I1124 14:00:09.250734  611344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-305966
	I1124 14:00:09.269624  611344 main.go:143] libmachine: Using SSH client type: native
	I1124 14:00:09.269963  611344 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33463 <nil> <nil>}
	I1124 14:00:09.269989  611344 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-305966 && echo "newest-cni-305966" | sudo tee /etc/hostname
	I1124 14:00:09.423943  611344 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-305966
	
	I1124 14:00:09.424043  611344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-305966
	I1124 14:00:09.442270  611344 main.go:143] libmachine: Using SSH client type: native
	I1124 14:00:09.442573  611344 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33463 <nil> <nil>}
	I1124 14:00:09.442602  611344 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-305966' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-305966/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-305966' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 14:00:09.591505  611344 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 14:00:09.591540  611344 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21932-348000/.minikube CaCertPath:/home/jenkins/minikube-integration/21932-348000/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21932-348000/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21932-348000/.minikube}
	I1124 14:00:09.591580  611344 ubuntu.go:190] setting up certificates
	I1124 14:00:09.591606  611344 provision.go:84] configureAuth start
	I1124 14:00:09.591678  611344 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-305966
	I1124 14:00:09.608000  611344 provision.go:143] copyHostCerts
	I1124 14:00:09.608076  611344 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-348000/.minikube/ca.pem, removing ...
	I1124 14:00:09.608092  611344 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-348000/.minikube/ca.pem
	I1124 14:00:09.608157  611344 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-348000/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21932-348000/.minikube/ca.pem (1078 bytes)
	I1124 14:00:09.608283  611344 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-348000/.minikube/cert.pem, removing ...
	I1124 14:00:09.608294  611344 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-348000/.minikube/cert.pem
	I1124 14:00:09.608331  611344 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-348000/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21932-348000/.minikube/cert.pem (1123 bytes)
	I1124 14:00:09.608412  611344 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-348000/.minikube/key.pem, removing ...
	I1124 14:00:09.608421  611344 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-348000/.minikube/key.pem
	I1124 14:00:09.608458  611344 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-348000/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21932-348000/.minikube/key.pem (1675 bytes)
	I1124 14:00:09.608524  611344 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21932-348000/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21932-348000/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21932-348000/.minikube/certs/ca-key.pem org=jenkins.newest-cni-305966 san=[127.0.0.1 192.168.94.2 localhost minikube newest-cni-305966]
	I1124 14:00:09.795774  611344 provision.go:177] copyRemoteCerts
	I1124 14:00:09.795836  611344 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 14:00:09.795882  611344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-305966
	I1124 14:00:09.815167  611344 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/newest-cni-305966/id_rsa Username:docker}
	I1124 14:00:09.919696  611344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1124 14:00:09.937539  611344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1124 14:00:09.955598  611344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1124 14:00:09.972859  611344 provision.go:87] duration metric: took 381.236357ms to configureAuth
	I1124 14:00:09.972886  611344 ubuntu.go:206] setting minikube options for container-runtime
	I1124 14:00:09.973062  611344 config.go:182] Loaded profile config "newest-cni-305966": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 14:00:09.973189  611344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-305966
	I1124 14:00:09.990437  611344 main.go:143] libmachine: Using SSH client type: native
	I1124 14:00:09.990698  611344 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33463 <nil> <nil>}
	I1124 14:00:09.990720  611344 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1124 14:00:06.826428  612215 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1124 14:00:06.826727  612215 start.go:159] libmachine.API.Create for "auto-165759" (driver="docker")
	I1124 14:00:06.826765  612215 client.go:173] LocalClient.Create starting
	I1124 14:00:06.826856  612215 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21932-348000/.minikube/certs/ca.pem
	I1124 14:00:06.826912  612215 main.go:143] libmachine: Decoding PEM data...
	I1124 14:00:06.826942  612215 main.go:143] libmachine: Parsing certificate...
	I1124 14:00:06.827035  612215 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21932-348000/.minikube/certs/cert.pem
	I1124 14:00:06.827066  612215 main.go:143] libmachine: Decoding PEM data...
	I1124 14:00:06.827081  612215 main.go:143] libmachine: Parsing certificate...
	I1124 14:00:06.827525  612215 cli_runner.go:164] Run: docker network inspect auto-165759 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1124 14:00:06.846848  612215 cli_runner.go:211] docker network inspect auto-165759 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1124 14:00:06.846935  612215 network_create.go:284] running [docker network inspect auto-165759] to gather additional debugging logs...
	I1124 14:00:06.846959  612215 cli_runner.go:164] Run: docker network inspect auto-165759
	W1124 14:00:06.864846  612215 cli_runner.go:211] docker network inspect auto-165759 returned with exit code 1
	I1124 14:00:06.864876  612215 network_create.go:287] error running [docker network inspect auto-165759]: docker network inspect auto-165759: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network auto-165759 not found
	I1124 14:00:06.864905  612215 network_create.go:289] output of [docker network inspect auto-165759]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network auto-165759 not found
	
	** /stderr **
	I1124 14:00:06.865080  612215 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 14:00:06.883080  612215 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-d51e7dfe1049 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:4a:86:1b:17:16:ff} reservation:<nil>}
	I1124 14:00:06.884122  612215 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-e3a6280986d1 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:92:e6:88:24:ba:69} reservation:<nil>}
	I1124 14:00:06.884636  612215 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-e4f79d672777 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:32:e2:7c:23:0e:27} reservation:<nil>}
	I1124 14:00:06.885473  612215 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ea31e0}
	I1124 14:00:06.885501  612215 network_create.go:124] attempt to create docker network auto-165759 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1124 14:00:06.885543  612215 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=auto-165759 auto-165759
	I1124 14:00:06.935118  612215 network_create.go:108] docker network auto-165759 192.168.76.0/24 created
	I1124 14:00:06.935167  612215 kic.go:121] calculated static IP "192.168.76.2" for the "auto-165759" container
	I1124 14:00:06.935270  612215 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1124 14:00:06.954577  612215 cli_runner.go:164] Run: docker volume create auto-165759 --label name.minikube.sigs.k8s.io=auto-165759 --label created_by.minikube.sigs.k8s.io=true
	I1124 14:00:06.973699  612215 oci.go:103] Successfully created a docker volume auto-165759
	I1124 14:00:06.973773  612215 cli_runner.go:164] Run: docker run --rm --name auto-165759-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-165759 --entrypoint /usr/bin/test -v auto-165759:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1124 14:00:07.370078  612215 oci.go:107] Successfully prepared a docker volume auto-165759
	I1124 14:00:07.370137  612215 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 14:00:07.370149  612215 kic.go:194] Starting extracting preloaded images to volume ...
	I1124 14:00:07.370222  612215 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21932-348000/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v auto-165759:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
	I1124 14:00:11.272332  611344 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1124 14:00:11.272357  611344 machine.go:97] duration metric: took 5.185230585s to provisionDockerMachine
	I1124 14:00:11.272370  611344 start.go:293] postStartSetup for "newest-cni-305966" (driver="docker")
	I1124 14:00:11.272381  611344 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 14:00:11.272443  611344 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 14:00:11.272503  611344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-305966
	I1124 14:00:11.289956  611344 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/newest-cni-305966/id_rsa Username:docker}
	I1124 14:00:11.390985  611344 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 14:00:11.394433  611344 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 14:00:11.394458  611344 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 14:00:11.394469  611344 filesync.go:126] Scanning /home/jenkins/minikube-integration/21932-348000/.minikube/addons for local assets ...
	I1124 14:00:11.394523  611344 filesync.go:126] Scanning /home/jenkins/minikube-integration/21932-348000/.minikube/files for local assets ...
	I1124 14:00:11.394620  611344 filesync.go:149] local asset: /home/jenkins/minikube-integration/21932-348000/.minikube/files/etc/ssl/certs/3515932.pem -> 3515932.pem in /etc/ssl/certs
	I1124 14:00:11.394726  611344 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 14:00:11.402145  611344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/files/etc/ssl/certs/3515932.pem --> /etc/ssl/certs/3515932.pem (1708 bytes)
	I1124 14:00:11.419357  611344 start.go:296] duration metric: took 146.974942ms for postStartSetup
	I1124 14:00:11.419428  611344 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 14:00:11.419465  611344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-305966
	I1124 14:00:11.436677  611344 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/newest-cni-305966/id_rsa Username:docker}
	I1124 14:00:11.536483  611344 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 14:00:11.541073  611344 fix.go:56] duration metric: took 5.96583724s for fixHost
	I1124 14:00:11.541099  611344 start.go:83] releasing machines lock for "newest-cni-305966", held for 5.965887021s
	I1124 14:00:11.541180  611344 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-305966
	I1124 14:00:11.558308  611344 ssh_runner.go:195] Run: cat /version.json
	I1124 14:00:11.558360  611344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-305966
	I1124 14:00:11.558426  611344 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 14:00:11.558516  611344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-305966
	I1124 14:00:11.575458  611344 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/newest-cni-305966/id_rsa Username:docker}
	I1124 14:00:11.576178  611344 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/newest-cni-305966/id_rsa Username:docker}
	I1124 14:00:11.671465  611344 ssh_runner.go:195] Run: systemctl --version
	I1124 14:00:11.727243  611344 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1124 14:00:11.759992  611344 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 14:00:11.764307  611344 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 14:00:11.764355  611344 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 14:00:11.771850  611344 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1124 14:00:11.771871  611344 start.go:496] detecting cgroup driver to use...
	I1124 14:00:11.771914  611344 detect.go:190] detected "systemd" cgroup driver on host os
	I1124 14:00:11.771962  611344 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1124 14:00:11.785647  611344 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1124 14:00:11.798863  611344 docker.go:218] disabling cri-docker service (if available) ...
	I1124 14:00:11.798925  611344 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 14:00:11.815812  611344 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 14:00:11.830569  611344 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 14:00:11.926016  611344 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 14:00:12.019537  611344 docker.go:234] disabling docker service ...
	I1124 14:00:12.019605  611344 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 14:00:12.035763  611344 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 14:00:12.048780  611344 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 14:00:12.132824  611344 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 14:00:12.228446  611344 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 14:00:12.241308  611344 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 14:00:12.255879  611344 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1124 14:00:12.256027  611344 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:00:12.265416  611344 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1124 14:00:12.265479  611344 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:00:12.274340  611344 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:00:12.282705  611344 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:00:12.294801  611344 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 14:00:12.303924  611344 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:00:12.312281  611344 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:00:12.319919  611344 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:00:12.327933  611344 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 14:00:12.334789  611344 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 14:00:12.341614  611344 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 14:00:12.455849  611344 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1124 14:00:12.614518  611344 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1124 14:00:12.614593  611344 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1124 14:00:12.619358  611344 start.go:564] Will wait 60s for crictl version
	I1124 14:00:12.619421  611344 ssh_runner.go:195] Run: which crictl
	I1124 14:00:12.623479  611344 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 14:00:12.648222  611344 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1124 14:00:12.648299  611344 ssh_runner.go:195] Run: crio --version
	I1124 14:00:12.677057  611344 ssh_runner.go:195] Run: crio --version
	I1124 14:00:12.706583  611344 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1124 14:00:12.707690  611344 cli_runner.go:164] Run: docker network inspect newest-cni-305966 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 14:00:12.727532  611344 ssh_runner.go:195] Run: grep 192.168.94.1	host.minikube.internal$ /etc/hosts
	I1124 14:00:12.731833  611344 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.94.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 14:00:12.744130  611344 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1124 14:00:12.745252  611344 kubeadm.go:884] updating cluster {Name:newest-cni-305966 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-305966 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 14:00:12.745403  611344 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 14:00:12.745468  611344 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 14:00:12.779183  611344 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 14:00:12.779205  611344 crio.go:433] Images already preloaded, skipping extraction
	I1124 14:00:12.779259  611344 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 14:00:12.806328  611344 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 14:00:12.806352  611344 cache_images.go:86] Images are preloaded, skipping loading
	I1124 14:00:12.806360  611344 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.34.1 crio true true} ...
	I1124 14:00:12.806482  611344 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-305966 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-305966 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1124 14:00:12.806572  611344 ssh_runner.go:195] Run: crio config
	I1124 14:00:12.854458  611344 cni.go:84] Creating CNI manager for ""
	I1124 14:00:12.854479  611344 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 14:00:12.854496  611344 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1124 14:00:12.854518  611344 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-305966 NodeName:newest-cni-305966 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/k
ubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 14:00:12.854638  611344 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-305966"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1124 14:00:12.854692  611344 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1124 14:00:12.862451  611344 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 14:00:12.862531  611344 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 14:00:12.869871  611344 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (367 bytes)
	I1124 14:00:12.882752  611344 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 14:00:12.897294  611344 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2211 bytes)
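The rendered kubeadm config shown above is staged on the node as /var/tmp/minikube/kubeadm.yaml.new. A quick illustrative check, assuming that path and the CIDR from the log, that the pod-network-cidr extra option really propagated into the file:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        const (
            path = "/var/tmp/minikube/kubeadm.yaml.new"
            want = `podSubnet: "10.42.0.0/16"`
        )
        data, err := os.ReadFile(path)
        if err != nil {
            fmt.Println("read failed:", err)
            os.Exit(1)
        }
        if strings.Contains(string(data), want) {
            fmt.Println("pod-network-cidr propagated into", path)
        } else {
            fmt.Println("expected", want, "in", path)
            os.Exit(1)
        }
    }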
	I1124 14:00:12.910632  611344 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1124 14:00:12.914337  611344 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 14:00:12.923870  611344 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 14:00:13.008239  611344 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 14:00:13.038363  611344 certs.go:69] Setting up /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/newest-cni-305966 for IP: 192.168.94.2
	I1124 14:00:13.038387  611344 certs.go:195] generating shared ca certs ...
	I1124 14:00:13.038406  611344 certs.go:227] acquiring lock for ca certs: {Name:mk929c5478505d0d4647158f3ccc02830de7b582 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:00:13.038582  611344 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21932-348000/.minikube/ca.key
	I1124 14:00:13.038637  611344 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21932-348000/.minikube/proxy-client-ca.key
	I1124 14:00:13.038650  611344 certs.go:257] generating profile certs ...
	I1124 14:00:13.038762  611344 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/newest-cni-305966/client.key
	I1124 14:00:13.038836  611344 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/newest-cni-305966/apiserver.key.707ba182
	I1124 14:00:13.038907  611344 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/newest-cni-305966/proxy-client.key
	I1124 14:00:13.039052  611344 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-348000/.minikube/certs/351593.pem (1338 bytes)
	W1124 14:00:13.039096  611344 certs.go:480] ignoring /home/jenkins/minikube-integration/21932-348000/.minikube/certs/351593_empty.pem, impossibly tiny 0 bytes
	I1124 14:00:13.039108  611344 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-348000/.minikube/certs/ca-key.pem (1675 bytes)
	I1124 14:00:13.039141  611344 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-348000/.minikube/certs/ca.pem (1078 bytes)
	I1124 14:00:13.039174  611344 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-348000/.minikube/certs/cert.pem (1123 bytes)
	I1124 14:00:13.039205  611344 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-348000/.minikube/certs/key.pem (1675 bytes)
	I1124 14:00:13.039265  611344 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-348000/.minikube/files/etc/ssl/certs/3515932.pem (1708 bytes)
	I1124 14:00:13.040322  611344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 14:00:13.063001  611344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 14:00:13.085566  611344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 14:00:13.106652  611344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1124 14:00:13.127998  611344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/newest-cni-305966/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1124 14:00:13.152231  611344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/newest-cni-305966/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1124 14:00:13.171960  611344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/newest-cni-305966/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 14:00:13.189487  611344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/newest-cni-305966/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1124 14:00:13.207088  611344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 14:00:13.224385  611344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/certs/351593.pem --> /usr/share/ca-certificates/351593.pem (1338 bytes)
	I1124 14:00:13.241576  611344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/files/etc/ssl/certs/3515932.pem --> /usr/share/ca-certificates/3515932.pem (1708 bytes)
	I1124 14:00:13.259033  611344 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 14:00:13.270834  611344 ssh_runner.go:195] Run: openssl version
	I1124 14:00:13.276982  611344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 14:00:13.284965  611344 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 14:00:13.288421  611344 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 13:14 /usr/share/ca-certificates/minikubeCA.pem
	I1124 14:00:13.288470  611344 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 14:00:13.327459  611344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 14:00:13.336104  611344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/351593.pem && ln -fs /usr/share/ca-certificates/351593.pem /etc/ssl/certs/351593.pem"
	I1124 14:00:13.345465  611344 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/351593.pem
	I1124 14:00:13.350000  611344 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 13:19 /usr/share/ca-certificates/351593.pem
	I1124 14:00:13.350052  611344 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/351593.pem
	I1124 14:00:13.394543  611344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/351593.pem /etc/ssl/certs/51391683.0"
	I1124 14:00:13.403198  611344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3515932.pem && ln -fs /usr/share/ca-certificates/3515932.pem /etc/ssl/certs/3515932.pem"
	I1124 14:00:13.413018  611344 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3515932.pem
	I1124 14:00:13.417473  611344 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 13:19 /usr/share/ca-certificates/3515932.pem
	I1124 14:00:13.417525  611344 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3515932.pem
	I1124 14:00:13.459516  611344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3515932.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 14:00:13.467973  611344 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 14:00:13.471733  611344 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1124 14:00:13.515058  611344 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1124 14:00:13.559630  611344 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1124 14:00:13.610038  611344 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1124 14:00:13.659104  611344 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1124 14:00:13.720195  611344 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
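The openssl x509 ... -checkend 86400 runs above verify that each control-plane certificate remains valid for at least another 24 hours. The equivalent check in Go, as an illustrative sketch using only the standard library (certificate paths taken from the log):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // expiresWithin reports whether the first certificate in the PEM file
    // expires within the given duration (the openssl -checkend semantics).
    func expiresWithin(path string, d time.Duration) (bool, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        for _, p := range []string{
            "/var/lib/minikube/certs/apiserver-kubelet-client.crt",
            "/var/lib/minikube/certs/etcd/server.crt",
            "/var/lib/minikube/certs/front-proxy-client.crt",
        } {
            soon, err := expiresWithin(p, 24*time.Hour)
            if err != nil {
                fmt.Println(p, "error:", err)
                continue
            }
            fmt.Printf("%s expires within 24h: %v\n", p, soon)
        }
    }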
	I1124 14:00:13.780380  611344 kubeadm.go:401] StartCluster: {Name:newest-cni-305966 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-305966 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262
144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 14:00:13.780515  611344 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 14:00:13.780595  611344 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 14:00:13.818024  611344 cri.go:89] found id: "d89776e70ad5c30b82fb56e369bc8ca9c79f468ce35aa2d87e210fc8e246b7bc"
	I1124 14:00:13.818049  611344 cri.go:89] found id: "e119d99662f18807314c108742a912814e08072ba8c225bbfef5c4cb0089eaf5"
	I1124 14:00:13.818054  611344 cri.go:89] found id: "dfa15d58172bfe1fb0c35b29157a03f11ec4135cd785419188b5de0213bc9aa5"
	I1124 14:00:13.818058  611344 cri.go:89] found id: "4ad843941afebe46506b73f43e719ede87774017b0d3ad3b20355a54904afbf7"
	I1124 14:00:13.818062  611344 cri.go:89] found id: ""
	I1124 14:00:13.818103  611344 ssh_runner.go:195] Run: sudo runc list -f json
	W1124 14:00:13.833826  611344 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T14:00:13Z" level=error msg="open /run/runc: no such file or directory"
	I1124 14:00:13.833927  611344 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 14:00:13.848758  611344 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1124 14:00:13.848777  611344 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1124 14:00:13.848821  611344 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1124 14:00:13.860299  611344 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1124 14:00:13.861443  611344 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-305966" does not appear in /home/jenkins/minikube-integration/21932-348000/kubeconfig
	I1124 14:00:13.862296  611344 kubeconfig.go:62] /home/jenkins/minikube-integration/21932-348000/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-305966" cluster setting kubeconfig missing "newest-cni-305966" context setting]
	I1124 14:00:13.863520  611344 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-348000/kubeconfig: {Name:mk6bbc2300c711b206dd5e2ef6fd04da250c6338 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:00:13.865937  611344 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1124 14:00:13.878751  611344 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.94.2
	I1124 14:00:13.878798  611344 kubeadm.go:602] duration metric: took 30.013528ms to restartPrimaryControlPlane
	I1124 14:00:13.878809  611344 kubeadm.go:403] duration metric: took 98.440465ms to StartCluster
	I1124 14:00:13.878826  611344 settings.go:142] acquiring lock: {Name:mk72c17792ecaf5f4aecae499df19a0043a48eea Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:00:13.878902  611344 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21932-348000/kubeconfig
	I1124 14:00:13.881473  611344 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-348000/kubeconfig: {Name:mk6bbc2300c711b206dd5e2ef6fd04da250c6338 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:00:13.881833  611344 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 14:00:13.882038  611344 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1124 14:00:13.882140  611344 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-305966"
	I1124 14:00:13.882152  611344 addons.go:70] Setting dashboard=true in profile "newest-cni-305966"
	I1124 14:00:13.882187  611344 addons.go:239] Setting addon dashboard=true in "newest-cni-305966"
	W1124 14:00:13.882200  611344 addons.go:248] addon dashboard should already be in state true
	I1124 14:00:13.882233  611344 host.go:66] Checking if "newest-cni-305966" exists ...
	I1124 14:00:13.882383  611344 config.go:182] Loaded profile config "newest-cni-305966": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 14:00:13.882737  611344 cli_runner.go:164] Run: docker container inspect newest-cni-305966 --format={{.State.Status}}
	I1124 14:00:13.882159  611344 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-305966"
	W1124 14:00:13.882915  611344 addons.go:248] addon storage-provisioner should already be in state true
	I1124 14:00:13.882963  611344 host.go:66] Checking if "newest-cni-305966" exists ...
	I1124 14:00:13.882169  611344 addons.go:70] Setting default-storageclass=true in profile "newest-cni-305966"
	I1124 14:00:13.883343  611344 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-305966"
	I1124 14:00:13.883464  611344 cli_runner.go:164] Run: docker container inspect newest-cni-305966 --format={{.State.Status}}
	I1124 14:00:13.883734  611344 cli_runner.go:164] Run: docker container inspect newest-cni-305966 --format={{.State.Status}}
	I1124 14:00:13.887025  611344 out.go:179] * Verifying Kubernetes components...
	I1124 14:00:13.888266  611344 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 14:00:13.912373  611344 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1124 14:00:13.913625  611344 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 14:00:13.913675  611344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1124 14:00:13.913739  611344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-305966
	I1124 14:00:13.917705  611344 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1124 14:00:13.918797  611344 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1124 14:00:13.920220  611344 addons.go:239] Setting addon default-storageclass=true in "newest-cni-305966"
	W1124 14:00:13.920278  611344 addons.go:248] addon default-storageclass should already be in state true
	I1124 14:00:13.920322  611344 host.go:66] Checking if "newest-cni-305966" exists ...
	I1124 14:00:13.920963  611344 cli_runner.go:164] Run: docker container inspect newest-cni-305966 --format={{.State.Status}}
	I1124 14:00:13.923142  611344 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1124 14:00:13.923221  611344 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1124 14:00:13.923316  611344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-305966
	I1124 14:00:13.950962  611344 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/newest-cni-305966/id_rsa Username:docker}
	I1124 14:00:13.966436  611344 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1124 14:00:13.966459  611344 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1124 14:00:13.966520  611344 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-305966
	I1124 14:00:13.971990  611344 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/newest-cni-305966/id_rsa Username:docker}
	I1124 14:00:13.993363  611344 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33463 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/newest-cni-305966/id_rsa Username:docker}
	I1124 14:00:14.090261  611344 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 14:00:14.103740  611344 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1124 14:00:14.120848  611344 api_server.go:52] waiting for apiserver process to appear ...
	I1124 14:00:14.121027  611344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 14:00:14.136621  611344 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1124 14:00:14.136649  611344 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1124 14:00:14.162367  611344 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1124 14:00:14.174687  611344 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1124 14:00:14.174713  611344 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1124 14:00:14.199721  611344 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1124 14:00:14.199762  611344 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1124 14:00:14.218224  611344 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1124 14:00:14.218251  611344 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1124 14:00:14.238865  611344 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1124 14:00:14.238923  611344 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1124 14:00:14.258602  611344 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1124 14:00:14.258621  611344 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1124 14:00:14.276787  611344 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1124 14:00:14.276832  611344 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1124 14:00:14.294104  611344 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1124 14:00:14.294129  611344 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1124 14:00:14.311561  611344 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1124 14:00:14.311583  611344 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1124 14:00:14.329534  611344 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
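Addon installation here is just an scp of each manifest followed by a single kubectl apply with one -f flag per file, run against the node-local kubeconfig. An illustrative Go wrapper for that pattern, with the binary path and a subset of the manifest paths taken from the log (not minikube's implementation):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        kubectl := "/var/lib/minikube/binaries/v1.34.1/kubectl"
        manifests := []string{
            "/etc/kubernetes/addons/dashboard-ns.yaml",
            "/etc/kubernetes/addons/dashboard-dp.yaml",
            "/etc/kubernetes/addons/dashboard-svc.yaml",
        }

        args := []string{"apply"}
        for _, m := range manifests {
            args = append(args, "-f", m)
        }

        cmd := exec.Command(kubectl, args...)
        // Use the cluster's own kubeconfig, as in the logged command.
        cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        if err := cmd.Run(); err != nil {
            fmt.Println("apply failed:", err)
            os.Exit(1)
        }
    }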
	I1124 14:00:11.793592  612215 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21932-348000/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v auto-165759:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (4.423324856s)
	I1124 14:00:11.793625  612215 kic.go:203] duration metric: took 4.423471856s to extract preloaded images to volume ...
	W1124 14:00:11.793727  612215 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1124 14:00:11.793767  612215 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1124 14:00:11.793827  612215 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1124 14:00:11.851109  612215 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-165759 --name auto-165759 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-165759 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-165759 --network auto-165759 --ip 192.168.76.2 --volume auto-165759:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1124 14:00:12.147656  612215 cli_runner.go:164] Run: docker container inspect auto-165759 --format={{.State.Running}}
	I1124 14:00:12.170110  612215 cli_runner.go:164] Run: docker container inspect auto-165759 --format={{.State.Status}}
	I1124 14:00:12.187432  612215 cli_runner.go:164] Run: docker exec auto-165759 stat /var/lib/dpkg/alternatives/iptables
	I1124 14:00:12.231829  612215 oci.go:144] the created container "auto-165759" has a running status.
	I1124 14:00:12.231859  612215 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21932-348000/.minikube/machines/auto-165759/id_rsa...
	I1124 14:00:12.425376  612215 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21932-348000/.minikube/machines/auto-165759/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1124 14:00:12.458732  612215 cli_runner.go:164] Run: docker container inspect auto-165759 --format={{.State.Status}}
	I1124 14:00:12.478104  612215 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1124 14:00:12.478152  612215 kic_runner.go:114] Args: [docker exec --privileged auto-165759 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1124 14:00:12.524590  612215 cli_runner.go:164] Run: docker container inspect auto-165759 --format={{.State.Status}}
	I1124 14:00:12.544431  612215 machine.go:94] provisionDockerMachine start ...
	I1124 14:00:12.544551  612215 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-165759
	I1124 14:00:12.563666  612215 main.go:143] libmachine: Using SSH client type: native
	I1124 14:00:12.563954  612215 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33468 <nil> <nil>}
	I1124 14:00:12.563968  612215 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 14:00:12.711809  612215 main.go:143] libmachine: SSH cmd err, output: <nil>: auto-165759
	
	I1124 14:00:12.711840  612215 ubuntu.go:182] provisioning hostname "auto-165759"
	I1124 14:00:12.711961  612215 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-165759
	I1124 14:00:12.730969  612215 main.go:143] libmachine: Using SSH client type: native
	I1124 14:00:12.731259  612215 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33468 <nil> <nil>}
	I1124 14:00:12.731276  612215 main.go:143] libmachine: About to run SSH command:
	sudo hostname auto-165759 && echo "auto-165759" | sudo tee /etc/hostname
	I1124 14:00:12.888489  612215 main.go:143] libmachine: SSH cmd err, output: <nil>: auto-165759
	
	I1124 14:00:12.888559  612215 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-165759
	I1124 14:00:12.907210  612215 main.go:143] libmachine: Using SSH client type: native
	I1124 14:00:12.907417  612215 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33468 <nil> <nil>}
	I1124 14:00:12.907439  612215 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-165759' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-165759/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-165759' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 14:00:13.053587  612215 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 14:00:13.053614  612215 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21932-348000/.minikube CaCertPath:/home/jenkins/minikube-integration/21932-348000/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21932-348000/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21932-348000/.minikube}
	I1124 14:00:13.053639  612215 ubuntu.go:190] setting up certificates
	I1124 14:00:13.053661  612215 provision.go:84] configureAuth start
	I1124 14:00:13.053726  612215 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-165759
	I1124 14:00:13.074607  612215 provision.go:143] copyHostCerts
	I1124 14:00:13.074673  612215 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-348000/.minikube/key.pem, removing ...
	I1124 14:00:13.074687  612215 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-348000/.minikube/key.pem
	I1124 14:00:13.074767  612215 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-348000/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21932-348000/.minikube/key.pem (1675 bytes)
	I1124 14:00:13.074990  612215 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-348000/.minikube/ca.pem, removing ...
	I1124 14:00:13.075007  612215 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-348000/.minikube/ca.pem
	I1124 14:00:13.075053  612215 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-348000/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21932-348000/.minikube/ca.pem (1078 bytes)
	I1124 14:00:13.075161  612215 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-348000/.minikube/cert.pem, removing ...
	I1124 14:00:13.075174  612215 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-348000/.minikube/cert.pem
	I1124 14:00:13.075209  612215 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-348000/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21932-348000/.minikube/cert.pem (1123 bytes)
	I1124 14:00:13.075301  612215 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21932-348000/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21932-348000/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21932-348000/.minikube/certs/ca-key.pem org=jenkins.auto-165759 san=[127.0.0.1 192.168.76.2 auto-165759 localhost minikube]
	I1124 14:00:13.153839  612215 provision.go:177] copyRemoteCerts
	I1124 14:00:13.153910  612215 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 14:00:13.153957  612215 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-165759
	I1124 14:00:13.174492  612215 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/auto-165759/id_rsa Username:docker}
	I1124 14:00:13.278186  612215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1124 14:00:13.297125  612215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I1124 14:00:13.314074  612215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1124 14:00:13.333122  612215 provision.go:87] duration metric: took 279.448871ms to configureAuth
	I1124 14:00:13.333150  612215 ubuntu.go:206] setting minikube options for container-runtime
	I1124 14:00:13.333349  612215 config.go:182] Loaded profile config "auto-165759": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 14:00:13.333470  612215 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-165759
	I1124 14:00:13.352220  612215 main.go:143] libmachine: Using SSH client type: native
	I1124 14:00:13.352481  612215 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33468 <nil> <nil>}
	I1124 14:00:13.352499  612215 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1124 14:00:13.670530  612215 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1124 14:00:13.670559  612215 machine.go:97] duration metric: took 1.126102746s to provisionDockerMachine
	I1124 14:00:13.670573  612215 client.go:176] duration metric: took 6.843800736s to LocalClient.Create
	I1124 14:00:13.670587  612215 start.go:167] duration metric: took 6.843861689s to libmachine.API.Create "auto-165759"
	I1124 14:00:13.670601  612215 start.go:293] postStartSetup for "auto-165759" (driver="docker")
	I1124 14:00:13.670616  612215 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 14:00:13.670685  612215 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 14:00:13.670738  612215 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-165759
	I1124 14:00:13.698138  612215 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/auto-165759/id_rsa Username:docker}
	I1124 14:00:13.820996  612215 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 14:00:13.825414  612215 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 14:00:13.825444  612215 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 14:00:13.825456  612215 filesync.go:126] Scanning /home/jenkins/minikube-integration/21932-348000/.minikube/addons for local assets ...
	I1124 14:00:13.825505  612215 filesync.go:126] Scanning /home/jenkins/minikube-integration/21932-348000/.minikube/files for local assets ...
	I1124 14:00:13.825601  612215 filesync.go:149] local asset: /home/jenkins/minikube-integration/21932-348000/.minikube/files/etc/ssl/certs/3515932.pem -> 3515932.pem in /etc/ssl/certs
	I1124 14:00:13.825709  612215 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 14:00:13.835417  612215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/files/etc/ssl/certs/3515932.pem --> /etc/ssl/certs/3515932.pem (1708 bytes)
	I1124 14:00:13.863041  612215 start.go:296] duration metric: took 192.424811ms for postStartSetup
	I1124 14:00:13.863433  612215 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-165759
	I1124 14:00:13.888619  612215 profile.go:143] Saving config to /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/auto-165759/config.json ...
	I1124 14:00:13.888908  612215 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 14:00:13.888966  612215 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-165759
	I1124 14:00:13.920902  612215 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/auto-165759/id_rsa Username:docker}
	I1124 14:00:14.050594  612215 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 14:00:14.063258  612215 start.go:128] duration metric: took 7.238541779s to createHost
	I1124 14:00:14.064625  612215 start.go:83] releasing machines lock for "auto-165759", held for 7.24000636s
	I1124 14:00:14.064709  612215 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-165759
	I1124 14:00:14.092621  612215 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 14:00:14.092724  612215 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-165759
	I1124 14:00:14.093184  612215 ssh_runner.go:195] Run: cat /version.json
	I1124 14:00:14.093367  612215 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-165759
	I1124 14:00:14.122464  612215 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/auto-165759/id_rsa Username:docker}
	I1124 14:00:14.124086  612215 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33468 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/auto-165759/id_rsa Username:docker}
	I1124 14:00:14.251712  612215 ssh_runner.go:195] Run: systemctl --version
	I1124 14:00:14.338825  612215 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1124 14:00:14.391538  612215 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 14:00:14.397502  612215 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 14:00:14.397562  612215 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 14:00:14.434564  612215 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
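The find ... -exec mv {} {}.mk_disabled step above parks any conflicting bridge/podman CNI configs before kindnet is installed. A rough Go equivalent (illustrative only; the glob patterns approximate the find expression in the log):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    func main() {
        for _, pattern := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
            matches, err := filepath.Glob(pattern)
            if err != nil {
                panic(err)
            }
            for _, m := range matches {
                if strings.HasSuffix(m, ".mk_disabled") {
                    continue // already parked on a previous run
                }
                if err := os.Rename(m, m+".mk_disabled"); err != nil {
                    fmt.Println("rename failed:", err)
                    continue
                }
                fmt.Println("disabled", m)
            }
        }
    }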
	I1124 14:00:14.434583  612215 start.go:496] detecting cgroup driver to use...
	I1124 14:00:14.434677  612215 detect.go:190] detected "systemd" cgroup driver on host os
	I1124 14:00:14.434718  612215 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1124 14:00:14.455600  612215 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1124 14:00:14.471560  612215 docker.go:218] disabling cri-docker service (if available) ...
	I1124 14:00:14.471606  612215 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 14:00:14.492846  612215 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 14:00:14.516666  612215 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 14:00:14.628047  612215 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 14:00:14.762541  612215 docker.go:234] disabling docker service ...
	I1124 14:00:14.762611  612215 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 14:00:14.788500  612215 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 14:00:14.804916  612215 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 14:00:14.914457  612215 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 14:00:15.017311  612215 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 14:00:15.033054  612215 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 14:00:15.050765  612215 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1124 14:00:15.050831  612215 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:00:15.067688  612215 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1124 14:00:15.067758  612215 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:00:15.078836  612215 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:00:15.088784  612215 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:00:15.098815  612215 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 14:00:15.107291  612215 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:00:15.115984  612215 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:00:15.130612  612215 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:00:15.141075  612215 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 14:00:15.148885  612215 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 14:00:15.158081  612215 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 14:00:15.284432  612215 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1124 14:00:15.458151  612215 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1124 14:00:15.458218  612215 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1124 14:00:15.462710  612215 start.go:564] Will wait 60s for crictl version
	I1124 14:00:15.462768  612215 ssh_runner.go:195] Run: which crictl
	I1124 14:00:15.466603  612215 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 14:00:15.514239  612215 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1124 14:00:15.514336  612215 ssh_runner.go:195] Run: crio --version
	I1124 14:00:15.571990  612215 ssh_runner.go:195] Run: crio --version
	I1124 14:00:15.624374  612215 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
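Both cluster bring-ups wait up to 60s for /var/run/crio/crio.sock to appear and then for crictl version to answer. A minimal polling loop in the same spirit, with the timeout and paths taken from the log (illustrative only):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "time"
    )

    // waitFor retries check every 500ms until it succeeds or the deadline passes.
    func waitFor(deadline time.Time, check func() error) error {
        for {
            err := check()
            if err == nil {
                return nil
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("timed out: %w", err)
            }
            time.Sleep(500 * time.Millisecond)
        }
    }

    func main() {
        deadline := time.Now().Add(60 * time.Second)
        if err := waitFor(deadline, func() error {
            _, err := os.Stat("/var/run/crio/crio.sock")
            return err
        }); err != nil {
            panic(err)
        }
        if err := waitFor(deadline, func() error {
            return exec.Command("sudo", "/usr/local/bin/crictl", "version").Run()
        }); err != nil {
            panic(err)
        }
        fmt.Println("CRI-O is up")
    }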
	I1124 14:00:16.229256  611344 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.125432983s)
	I1124 14:00:16.229331  611344 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.10825543s)
	I1124 14:00:16.229372  611344 api_server.go:72] duration metric: took 2.347389168s to wait for apiserver process to appear ...
	I1124 14:00:16.229383  611344 api_server.go:88] waiting for apiserver healthz status ...
	I1124 14:00:16.229406  611344 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1124 14:00:16.229418  611344 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.067014819s)
	I1124 14:00:16.229561  611344 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.899992686s)
	I1124 14:00:16.231089  611344 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-305966 addons enable metrics-server
	
	I1124 14:00:16.234148  611344 api_server.go:279] https://192.168.94.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1124 14:00:16.234173  611344 api_server.go:103] status: https://192.168.94.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1124 14:00:16.242061  611344 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
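The 500 above is expected this early in a restart: the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks have not finished, so minikube keeps polling /healthz until it returns 200. An illustrative poller against the endpoint from the log (InsecureSkipVerify because the apiserver certificate is signed by the local minikubeCA, not a public CA):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        deadline := time.Now().Add(4 * time.Minute)
        for time.Now().Before(deadline) {
            resp, err := client.Get("https://192.168.94.2:8443/healthz")
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Println("apiserver healthy")
                    return
                }
                fmt.Printf("healthz %d: %s\n", resp.StatusCode, body)
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("gave up waiting for apiserver")
    }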
	I1124 14:00:15.625433  612215 cli_runner.go:164] Run: docker network inspect auto-165759 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 14:00:15.649305  612215 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1124 14:00:15.658707  612215 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 14:00:15.671300  612215 kubeadm.go:884] updating cluster {Name:auto-165759 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-165759 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 14:00:15.671506  612215 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 14:00:15.671650  612215 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 14:00:15.711238  612215 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 14:00:15.711262  612215 crio.go:433] Images already preloaded, skipping extraction
	I1124 14:00:15.711316  612215 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 14:00:15.746472  612215 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 14:00:15.746504  612215 cache_images.go:86] Images are preloaded, skipping loading
	I1124 14:00:15.746517  612215 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1124 14:00:15.746625  612215 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=auto-165759 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:auto-165759 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1124 14:00:15.746716  612215 ssh_runner.go:195] Run: crio config
	I1124 14:00:15.807927  612215 cni.go:84] Creating CNI manager for ""
	I1124 14:00:15.807956  612215 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1124 14:00:15.807976  612215 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 14:00:15.808011  612215 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-165759 NodeName:auto-165759 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 14:00:15.808179  612215 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-165759"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1124 14:00:15.808253  612215 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1124 14:00:15.818192  612215 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 14:00:15.818241  612215 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 14:00:15.830005  612215 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
	I1124 14:00:15.846104  612215 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 14:00:15.868243  612215 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2207 bytes)
	I1124 14:00:15.884720  612215 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1124 14:00:15.889876  612215 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 14:00:15.901296  612215 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 14:00:16.018752  612215 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 14:00:16.043555  612215 certs.go:69] Setting up /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/auto-165759 for IP: 192.168.76.2
	I1124 14:00:16.043582  612215 certs.go:195] generating shared ca certs ...
	I1124 14:00:16.043601  612215 certs.go:227] acquiring lock for ca certs: {Name:mk929c5478505d0d4647158f3ccc02830de7b582 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:00:16.043768  612215 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21932-348000/.minikube/ca.key
	I1124 14:00:16.043833  612215 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21932-348000/.minikube/proxy-client-ca.key
	I1124 14:00:16.043844  612215 certs.go:257] generating profile certs ...
	I1124 14:00:16.043975  612215 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/auto-165759/client.key
	I1124 14:00:16.043991  612215 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/auto-165759/client.crt with IP's: []
	I1124 14:00:16.293620  612215 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/auto-165759/client.crt ...
	I1124 14:00:16.293642  612215 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/auto-165759/client.crt: {Name:mkfb66e3072b561f19728f93c659ef2477f49e7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:00:16.293788  612215 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/auto-165759/client.key ...
	I1124 14:00:16.293799  612215 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/auto-165759/client.key: {Name:mk8c2a3b70815de053602eb3df8d6e8f095c6713 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:00:16.293876  612215 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/auto-165759/apiserver.key.3fd27ea3
	I1124 14:00:16.293899  612215 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/auto-165759/apiserver.crt.3fd27ea3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1124 14:00:16.367561  612215 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/auto-165759/apiserver.crt.3fd27ea3 ...
	I1124 14:00:16.367580  612215 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/auto-165759/apiserver.crt.3fd27ea3: {Name:mk24f9dfe444fcae68d949bf267d3a252d85caa8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:00:16.367694  612215 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/auto-165759/apiserver.key.3fd27ea3 ...
	I1124 14:00:16.367708  612215 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/auto-165759/apiserver.key.3fd27ea3: {Name:mkc3bc46d4e34d08ca14cc3de17f7084014b9532 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:00:16.367781  612215 certs.go:382] copying /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/auto-165759/apiserver.crt.3fd27ea3 -> /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/auto-165759/apiserver.crt
	I1124 14:00:16.367851  612215 certs.go:386] copying /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/auto-165759/apiserver.key.3fd27ea3 -> /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/auto-165759/apiserver.key
	I1124 14:00:16.367919  612215 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/auto-165759/proxy-client.key
	I1124 14:00:16.367934  612215 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/auto-165759/proxy-client.crt with IP's: []
	I1124 14:00:16.403104  612215 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/auto-165759/proxy-client.crt ...
	I1124 14:00:16.403121  612215 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/auto-165759/proxy-client.crt: {Name:mkb8228759e3aa4c9c79f27c19473d40f82f511b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:00:16.403222  612215 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/auto-165759/proxy-client.key ...
	I1124 14:00:16.403232  612215 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/auto-165759/proxy-client.key: {Name:mkecd521882f11803819669731e8cd97a748e878 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:00:16.403397  612215 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-348000/.minikube/certs/351593.pem (1338 bytes)
	W1124 14:00:16.403431  612215 certs.go:480] ignoring /home/jenkins/minikube-integration/21932-348000/.minikube/certs/351593_empty.pem, impossibly tiny 0 bytes
	I1124 14:00:16.403441  612215 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-348000/.minikube/certs/ca-key.pem (1675 bytes)
	I1124 14:00:16.403464  612215 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-348000/.minikube/certs/ca.pem (1078 bytes)
	I1124 14:00:16.403488  612215 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-348000/.minikube/certs/cert.pem (1123 bytes)
	I1124 14:00:16.403512  612215 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-348000/.minikube/certs/key.pem (1675 bytes)
	I1124 14:00:16.403550  612215 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-348000/.minikube/files/etc/ssl/certs/3515932.pem (1708 bytes)
	I1124 14:00:16.404121  612215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 14:00:16.422094  612215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 14:00:16.441628  612215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 14:00:16.460620  612215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1124 14:00:16.479189  612215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/auto-165759/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1415 bytes)
	I1124 14:00:16.497037  612215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/auto-165759/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1124 14:00:16.517251  612215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/auto-165759/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 14:00:16.537668  612215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/auto-165759/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1124 14:00:16.556912  612215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 14:00:16.577846  612215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/certs/351593.pem --> /usr/share/ca-certificates/351593.pem (1338 bytes)
	I1124 14:00:16.597943  612215 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/files/etc/ssl/certs/3515932.pem --> /usr/share/ca-certificates/3515932.pem (1708 bytes)
	I1124 14:00:16.616957  612215 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 14:00:16.630882  612215 ssh_runner.go:195] Run: openssl version
	I1124 14:00:16.637881  612215 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/351593.pem && ln -fs /usr/share/ca-certificates/351593.pem /etc/ssl/certs/351593.pem"
	I1124 14:00:16.243058  611344 addons.go:530] duration metric: took 2.361038584s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1124 14:00:16.730169  611344 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1124 14:00:16.734323  611344 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1124 14:00:16.735266  611344 api_server.go:141] control plane version: v1.34.1
	I1124 14:00:16.735292  611344 api_server.go:131] duration metric: took 505.902315ms to wait for apiserver health ...
	I1124 14:00:16.735304  611344 system_pods.go:43] waiting for kube-system pods to appear ...
	I1124 14:00:16.738854  611344 system_pods.go:59] 8 kube-system pods found
	I1124 14:00:16.738899  611344 system_pods.go:61] "coredns-66bc5c9577-z4d5k" [a925cbe1-f3d5-4821-a1bf-afc3d3ed1062] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1124 14:00:16.738911  611344 system_pods.go:61] "etcd-newest-cni-305966" [f603e9b8-89c7-4735-97bb-82e67ab5fccd] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1124 14:00:16.738920  611344 system_pods.go:61] "kindnet-7c2kd" [353470b5-271a-4976-9823-aae696867ae3] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1124 14:00:16.738933  611344 system_pods.go:61] "kube-apiserver-newest-cni-305966" [4bbaeb61-1730-4352-815e-afc398299d99] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1124 14:00:16.738948  611344 system_pods.go:61] "kube-controller-manager-newest-cni-305966" [caf78e4b-40b4-467b-ade5-44a85043db3c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1124 14:00:16.738960  611344 system_pods.go:61] "kube-proxy-bwchr" [d1715fb6-8be2-493f-81c7-9e606cca9736] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1124 14:00:16.738971  611344 system_pods.go:61] "kube-scheduler-newest-cni-305966" [60d22afb-3af1-42aa-bce3-f4bc578e68ff] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1124 14:00:16.738982  611344 system_pods.go:61] "storage-provisioner" [408ded79-aabb-4020-867d-a7c3da485d56] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1124 14:00:16.738990  611344 system_pods.go:74] duration metric: took 3.678015ms to wait for pod list to return data ...
	I1124 14:00:16.739001  611344 default_sa.go:34] waiting for default service account to be created ...
	I1124 14:00:16.741118  611344 default_sa.go:45] found service account: "default"
	I1124 14:00:16.741137  611344 default_sa.go:55] duration metric: took 2.126918ms for default service account to be created ...
	I1124 14:00:16.741149  611344 kubeadm.go:587] duration metric: took 2.859166676s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1124 14:00:16.741170  611344 node_conditions.go:102] verifying NodePressure condition ...
	I1124 14:00:16.743473  611344 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1124 14:00:16.743502  611344 node_conditions.go:123] node cpu capacity is 8
	I1124 14:00:16.743518  611344 node_conditions.go:105] duration metric: took 2.342524ms to run NodePressure ...
	I1124 14:00:16.743532  611344 start.go:242] waiting for startup goroutines ...
	I1124 14:00:16.743547  611344 start.go:247] waiting for cluster config update ...
	I1124 14:00:16.743562  611344 start.go:256] writing updated cluster config ...
	I1124 14:00:16.743866  611344 ssh_runner.go:195] Run: rm -f paused
	I1124 14:00:16.793638  611344 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1124 14:00:16.795428  611344 out.go:179] * Done! kubectl is now configured to use "newest-cni-305966" cluster and "default" namespace by default
	I1124 14:00:16.647204  612215 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/351593.pem
	I1124 14:00:16.651320  612215 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 13:19 /usr/share/ca-certificates/351593.pem
	I1124 14:00:16.651367  612215 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/351593.pem
	I1124 14:00:16.698089  612215 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/351593.pem /etc/ssl/certs/51391683.0"
	I1124 14:00:16.708151  612215 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3515932.pem && ln -fs /usr/share/ca-certificates/3515932.pem /etc/ssl/certs/3515932.pem"
	I1124 14:00:16.717574  612215 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3515932.pem
	I1124 14:00:16.721678  612215 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 13:19 /usr/share/ca-certificates/3515932.pem
	I1124 14:00:16.721743  612215 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3515932.pem
	I1124 14:00:16.766691  612215 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3515932.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 14:00:16.775921  612215 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 14:00:16.784481  612215 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 14:00:16.788372  612215 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 13:14 /usr/share/ca-certificates/minikubeCA.pem
	I1124 14:00:16.788422  612215 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 14:00:16.830541  612215 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 14:00:16.839593  612215 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 14:00:16.843695  612215 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1124 14:00:16.843739  612215 kubeadm.go:401] StartCluster: {Name:auto-165759 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:auto-165759 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 14:00:16.843808  612215 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 14:00:16.843883  612215 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 14:00:16.875196  612215 cri.go:89] found id: ""
	I1124 14:00:16.875260  612215 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 14:00:16.884482  612215 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1124 14:00:16.894253  612215 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1124 14:00:16.894311  612215 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 14:00:16.902785  612215 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1124 14:00:16.902802  612215 kubeadm.go:158] found existing configuration files:
	
	I1124 14:00:16.902843  612215 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1124 14:00:16.911622  612215 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1124 14:00:16.911684  612215 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1124 14:00:16.919913  612215 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1124 14:00:16.927966  612215 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1124 14:00:16.928022  612215 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 14:00:16.935633  612215 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1124 14:00:16.943371  612215 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1124 14:00:16.943426  612215 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 14:00:16.951152  612215 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1124 14:00:16.958295  612215 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1124 14:00:16.958340  612215 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1124 14:00:16.965184  612215 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1124 14:00:17.004327  612215 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1124 14:00:17.004411  612215 kubeadm.go:319] [preflight] Running pre-flight checks
	I1124 14:00:17.025121  612215 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1124 14:00:17.025207  612215 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1124 14:00:17.025261  612215 kubeadm.go:319] OS: Linux
	I1124 14:00:17.025361  612215 kubeadm.go:319] CGROUPS_CPU: enabled
	I1124 14:00:17.025463  612215 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1124 14:00:17.025518  612215 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1124 14:00:17.025589  612215 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1124 14:00:17.025664  612215 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1124 14:00:17.025744  612215 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1124 14:00:17.025834  612215 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1124 14:00:17.025910  612215 kubeadm.go:319] CGROUPS_IO: enabled
	I1124 14:00:17.102227  612215 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1124 14:00:17.102576  612215 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1124 14:00:17.103416  612215 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1124 14:00:17.112118  612215 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1124 14:00:17.115392  612215 out.go:252]   - Generating certificates and keys ...
	I1124 14:00:17.115498  612215 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1124 14:00:17.115587  612215 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1124 14:00:17.320789  612215 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1124 14:00:18.013956  612215 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1124 14:00:18.305556  612215 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1124 14:00:18.550049  612215 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1124 14:00:18.604073  612215 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1124 14:00:18.604250  612215 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [auto-165759 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1124 14:00:18.687615  612215 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1124 14:00:18.687763  612215 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [auto-165759 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1124 14:00:18.741905  612215 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1124 14:00:18.881243  612215 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1124 14:00:19.003028  612215 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1124 14:00:19.003167  612215 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1124 14:00:19.142849  612215 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1124 14:00:19.532300  612215 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1124 14:00:19.735667  612215 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1124 14:00:19.908984  612215 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1124 14:00:19.930997  612215 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1124 14:00:19.931667  612215 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1124 14:00:19.935278  612215 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1124 14:00:19.937998  612215 out.go:252]   - Booting up control plane ...
	I1124 14:00:19.938110  612215 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1124 14:00:19.938223  612215 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1124 14:00:19.938333  612215 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1124 14:00:19.957141  612215 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1124 14:00:19.957315  612215 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1124 14:00:19.966728  612215 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1124 14:00:19.967317  612215 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1124 14:00:19.967447  612215 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1124 14:00:20.068953  612215 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1124 14:00:20.069143  612215 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1124 14:00:20.569618  612215 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 500.857333ms
	I1124 14:00:20.573789  612215 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1124 14:00:20.573932  612215 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1124 14:00:20.574052  612215 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1124 14:00:20.574122  612215 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	
	
	==> CRI-O <==
	Nov 24 14:00:16 newest-cni-305966 crio[521]: time="2025-11-24T14:00:16.425158007Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 14:00:16 newest-cni-305966 crio[521]: time="2025-11-24T14:00:16.427588856Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=ec1b4782-db86-474f-900f-7256d40e75f3 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 24 14:00:16 newest-cni-305966 crio[521]: time="2025-11-24T14:00:16.428211088Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=066de6ea-c5be-4bcd-8403-ff846e5ea720 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 24 14:00:16 newest-cni-305966 crio[521]: time="2025-11-24T14:00:16.429141198Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 24 14:00:16 newest-cni-305966 crio[521]: time="2025-11-24T14:00:16.429696936Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 24 14:00:16 newest-cni-305966 crio[521]: time="2025-11-24T14:00:16.430046588Z" level=info msg="Ran pod sandbox 0b0017a7c8597dcb404ec14d596076c8a3ae3ce6bdaa5dcdd4f8e09285b18b4f with infra container: kube-system/kube-proxy-bwchr/POD" id=ec1b4782-db86-474f-900f-7256d40e75f3 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 24 14:00:16 newest-cni-305966 crio[521]: time="2025-11-24T14:00:16.430511638Z" level=info msg="Ran pod sandbox 7f71b60c0073d34b0ba987199ee21b8e15ce8ca0e28afe0070943950ab744991 with infra container: kube-system/kindnet-7c2kd/POD" id=066de6ea-c5be-4bcd-8403-ff846e5ea720 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 24 14:00:16 newest-cni-305966 crio[521]: time="2025-11-24T14:00:16.431090177Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=dcf07394-3260-43b2-8c08-0f4d0436de84 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 14:00:16 newest-cni-305966 crio[521]: time="2025-11-24T14:00:16.431356077Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=173572c7-5f36-418c-9ac9-a6da545065ee name=/runtime.v1.ImageService/ImageStatus
	Nov 24 14:00:16 newest-cni-305966 crio[521]: time="2025-11-24T14:00:16.43200066Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=65a87200-6ce5-48cf-8ab2-ec488485082a name=/runtime.v1.ImageService/ImageStatus
	Nov 24 14:00:16 newest-cni-305966 crio[521]: time="2025-11-24T14:00:16.432259244Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=d0b517c5-4129-4671-b3a6-a4fe6e17aa34 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 14:00:16 newest-cni-305966 crio[521]: time="2025-11-24T14:00:16.432947141Z" level=info msg="Creating container: kube-system/kube-proxy-bwchr/kube-proxy" id=b93727d9-d0a5-4bbd-ae24-ab881603c1a8 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 14:00:16 newest-cni-305966 crio[521]: time="2025-11-24T14:00:16.433056247Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 14:00:16 newest-cni-305966 crio[521]: time="2025-11-24T14:00:16.433347205Z" level=info msg="Creating container: kube-system/kindnet-7c2kd/kindnet-cni" id=ab5cbbed-0c2c-4d62-8890-91a4ddfaacac name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 14:00:16 newest-cni-305966 crio[521]: time="2025-11-24T14:00:16.433432874Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 14:00:16 newest-cni-305966 crio[521]: time="2025-11-24T14:00:16.438503169Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 14:00:16 newest-cni-305966 crio[521]: time="2025-11-24T14:00:16.438924898Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 14:00:16 newest-cni-305966 crio[521]: time="2025-11-24T14:00:16.439025875Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 14:00:16 newest-cni-305966 crio[521]: time="2025-11-24T14:00:16.439385887Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 14:00:16 newest-cni-305966 crio[521]: time="2025-11-24T14:00:16.465472433Z" level=info msg="Created container 93ca1d74735d61de68ecdb39646c7d3ef72eef0bd011bb61a209f4409d96a341: kube-system/kindnet-7c2kd/kindnet-cni" id=ab5cbbed-0c2c-4d62-8890-91a4ddfaacac name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 14:00:16 newest-cni-305966 crio[521]: time="2025-11-24T14:00:16.466057702Z" level=info msg="Starting container: 93ca1d74735d61de68ecdb39646c7d3ef72eef0bd011bb61a209f4409d96a341" id=7479948e-450e-448f-bb37-d5958a27f492 name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 14:00:16 newest-cni-305966 crio[521]: time="2025-11-24T14:00:16.46780103Z" level=info msg="Started container" PID=1050 containerID=93ca1d74735d61de68ecdb39646c7d3ef72eef0bd011bb61a209f4409d96a341 description=kube-system/kindnet-7c2kd/kindnet-cni id=7479948e-450e-448f-bb37-d5958a27f492 name=/runtime.v1.RuntimeService/StartContainer sandboxID=7f71b60c0073d34b0ba987199ee21b8e15ce8ca0e28afe0070943950ab744991
	Nov 24 14:00:16 newest-cni-305966 crio[521]: time="2025-11-24T14:00:16.468749822Z" level=info msg="Created container 960f84874b52e42fe78f3a9cba42e0e7801828964c0d30c9a28aa87a5a060904: kube-system/kube-proxy-bwchr/kube-proxy" id=b93727d9-d0a5-4bbd-ae24-ab881603c1a8 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 14:00:16 newest-cni-305966 crio[521]: time="2025-11-24T14:00:16.469268218Z" level=info msg="Starting container: 960f84874b52e42fe78f3a9cba42e0e7801828964c0d30c9a28aa87a5a060904" id=fa37bb19-9405-4a35-86aa-147ddc2efbd5 name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 14:00:16 newest-cni-305966 crio[521]: time="2025-11-24T14:00:16.471804221Z" level=info msg="Started container" PID=1051 containerID=960f84874b52e42fe78f3a9cba42e0e7801828964c0d30c9a28aa87a5a060904 description=kube-system/kube-proxy-bwchr/kube-proxy id=fa37bb19-9405-4a35-86aa-147ddc2efbd5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=0b0017a7c8597dcb404ec14d596076c8a3ae3ce6bdaa5dcdd4f8e09285b18b4f
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	93ca1d74735d6       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   5 seconds ago       Running             kindnet-cni               1                   7f71b60c0073d       kindnet-7c2kd                               kube-system
	960f84874b52e       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   5 seconds ago       Running             kube-proxy                1                   0b0017a7c8597       kube-proxy-bwchr                            kube-system
	d89776e70ad5c       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   8 seconds ago       Running             kube-apiserver            1                   1cc2842573d61       kube-apiserver-newest-cni-305966            kube-system
	e119d99662f18       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   8 seconds ago       Running             etcd                      1                   f8e2e5354e4ea       etcd-newest-cni-305966                      kube-system
	dfa15d58172bf       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   8 seconds ago       Running             kube-scheduler            1                   e1ff60f922a27       kube-scheduler-newest-cni-305966            kube-system
	4ad843941afeb       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   8 seconds ago       Running             kube-controller-manager   1                   ced98c89db1d8       kube-controller-manager-newest-cni-305966   kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-305966
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-305966
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b5d1c9f4e75f4e638a533695fd62619949cefcab
	                    minikube.k8s.io/name=newest-cni-305966
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T13_59_54_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 13:59:51 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-305966
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 14:00:15 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 14:00:15 +0000   Mon, 24 Nov 2025 13:59:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 14:00:15 +0000   Mon, 24 Nov 2025 13:59:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 14:00:15 +0000   Mon, 24 Nov 2025 13:59:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Mon, 24 Nov 2025 14:00:15 +0000   Mon, 24 Nov 2025 13:59:49 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    newest-cni-305966
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                ecbd5efe-848f-483d-9396-2b651bf1384a
	  Boot ID:                    9a34d64a-eb17-4892-9c0b-855837aec864
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-305966                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         29s
	  kube-system                 kindnet-7c2kd                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      23s
	  kube-system                 kube-apiserver-newest-cni-305966             250m (3%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-controller-manager-newest-cni-305966    200m (2%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-proxy-bwchr                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
	  kube-system                 kube-scheduler-newest-cni-305966             100m (1%)     0 (0%)      0 (0%)           0 (0%)         29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 22s   kube-proxy       
	  Normal  Starting                 5s    kube-proxy       
	  Normal  Starting                 29s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  28s   kubelet          Node newest-cni-305966 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28s   kubelet          Node newest-cni-305966 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     28s   kubelet          Node newest-cni-305966 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           24s   node-controller  Node newest-cni-305966 event: Registered Node newest-cni-305966 in Controller
	  Normal  RegisteredNode           4s    node-controller  Node newest-cni-305966 event: Registered Node newest-cni-305966 in Controller
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a c8 62 0b 56 43 08 06
	[  +0.000389] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 7e 1c 4f d3 0f 6c 08 06
	[Nov24 13:16] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +1.054353] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +1.023912] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +1.023872] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +1.023897] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +1.023891] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +2.047768] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +4.031637] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +8.191144] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[ +16.382308] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[Nov24 13:17] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	
	
	==> etcd [e119d99662f18807314c108742a912814e08072ba8c225bbfef5c4cb0089eaf5] <==
	{"level":"warn","ts":"2025-11-24T14:00:14.752095Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38932","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:00:14.761148Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38934","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:00:14.772866Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38956","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:00:14.789557Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:38978","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:00:14.799241Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39002","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:00:14.806459Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39024","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:00:14.812840Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39042","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:00:14.820060Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39074","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:00:14.827294Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39080","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:00:14.835622Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39096","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:00:14.847934Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39104","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:00:14.858657Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:00:14.866798Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39142","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:00:14.873772Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39160","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:00:14.880425Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39178","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:00:14.887245Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39202","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:00:14.894148Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39214","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:00:14.900198Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39234","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:00:14.908176Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39258","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:00:14.915445Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39264","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:00:14.921976Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39280","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:00:14.936616Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39290","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:00:14.949029Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39312","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:00:14.969806Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39326","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:00:15.014971Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:39342","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 14:00:22 up  2:42,  0 user,  load average: 2.85, 2.93, 2.05
	Linux newest-cni-305966 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [93ca1d74735d61de68ecdb39646c7d3ef72eef0bd011bb61a209f4409d96a341] <==
	I1124 14:00:16.699736       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 14:00:16.766644       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1124 14:00:16.766823       1 main.go:148] setting mtu 1500 for CNI 
	I1124 14:00:16.766852       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 14:00:16.766880       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T14:00:16Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 14:00:16.899747       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 14:00:16.899797       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 14:00:16.899810       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 14:00:16.998773       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1124 14:00:17.300005       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 14:00:17.300103       1 metrics.go:72] Registering metrics
	I1124 14:00:17.300196       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [d89776e70ad5c30b82fb56e369bc8ca9c79f468ce35aa2d87e210fc8e246b7bc] <==
	I1124 14:00:15.603525       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1124 14:00:15.603597       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1124 14:00:15.603971       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	E1124 14:00:15.610718       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1124 14:00:15.611692       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1124 14:00:15.611772       1 aggregator.go:171] initial CRD sync complete...
	I1124 14:00:15.611797       1 autoregister_controller.go:144] Starting autoregister controller
	I1124 14:00:15.611805       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1124 14:00:15.611812       1 cache.go:39] Caches are synced for autoregister controller
	I1124 14:00:15.619322       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1124 14:00:15.638258       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 14:00:15.640260       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1124 14:00:15.968793       1 controller.go:667] quota admission added evaluator for: namespaces
	I1124 14:00:15.996073       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1124 14:00:16.019245       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 14:00:16.034402       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 14:00:16.050210       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1124 14:00:16.092273       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.97.32.19"}
	I1124 14:00:16.102678       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.102.153.146"}
	I1124 14:00:16.501809       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1124 14:00:18.926832       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1124 14:00:18.926869       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1124 14:00:19.328225       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1124 14:00:19.427147       1 controller.go:667] quota admission added evaluator for: endpoints
	I1124 14:00:19.527224       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [4ad843941afebe46506b73f43e719ede87774017b0d3ad3b20355a54904afbf7] <==
	I1124 14:00:18.888916       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1124 14:00:18.890567       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1124 14:00:18.900850       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1124 14:00:18.900924       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1124 14:00:18.900962       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1124 14:00:18.900976       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1124 14:00:18.900984       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1124 14:00:18.902205       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1124 14:00:18.913472       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1124 14:00:18.922926       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1124 14:00:18.922952       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1124 14:00:18.922962       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1124 14:00:18.923055       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1124 14:00:18.924155       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1124 14:00:18.924239       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1124 14:00:18.924280       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1124 14:00:18.926494       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1124 14:00:18.929107       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 14:00:18.929955       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1124 14:00:18.932558       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1124 14:00:18.938850       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 14:00:18.938864       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1124 14:00:18.938872       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1124 14:00:18.940964       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1124 14:00:18.948059       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [960f84874b52e42fe78f3a9cba42e0e7801828964c0d30c9a28aa87a5a060904] <==
	I1124 14:00:16.507502       1 server_linux.go:53] "Using iptables proxy"
	I1124 14:00:16.568112       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1124 14:00:16.668871       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1124 14:00:16.668927       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1124 14:00:16.669054       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 14:00:16.689910       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 14:00:16.689976       1 server_linux.go:132] "Using iptables Proxier"
	I1124 14:00:16.694935       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 14:00:16.695356       1 server.go:527] "Version info" version="v1.34.1"
	I1124 14:00:16.695384       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 14:00:16.697358       1 config.go:106] "Starting endpoint slice config controller"
	I1124 14:00:16.697371       1 config.go:200] "Starting service config controller"
	I1124 14:00:16.697396       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 14:00:16.697409       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 14:00:16.697416       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 14:00:16.697398       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 14:00:16.697624       1 config.go:309] "Starting node config controller"
	I1124 14:00:16.697638       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 14:00:16.697646       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1124 14:00:16.797577       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1124 14:00:16.797605       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1124 14:00:16.797586       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [dfa15d58172bfe1fb0c35b29157a03f11ec4135cd785419188b5de0213bc9aa5] <==
	I1124 14:00:14.903162       1 serving.go:386] Generated self-signed cert in-memory
	I1124 14:00:15.908652       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1124 14:00:15.908702       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 14:00:15.913863       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1124 14:00:15.913866       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 14:00:15.913926       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 14:00:15.913940       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1124 14:00:15.913963       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1124 14:00:15.913925       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1124 14:00:15.914354       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1124 14:00:15.914388       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1124 14:00:16.015162       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1124 14:00:16.015295       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 14:00:16.015356       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Nov 24 14:00:15 newest-cni-305966 kubelet[677]: I1124 14:00:15.620212     677 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-305966"
	Nov 24 14:00:15 newest-cni-305966 kubelet[677]: E1124 14:00:15.632024     677 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-305966\" already exists" pod="kube-system/kube-scheduler-newest-cni-305966"
	Nov 24 14:00:15 newest-cni-305966 kubelet[677]: I1124 14:00:15.632075     677 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-305966"
	Nov 24 14:00:15 newest-cni-305966 kubelet[677]: E1124 14:00:15.638637     677 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-305966\" already exists" pod="kube-system/etcd-newest-cni-305966"
	Nov 24 14:00:15 newest-cni-305966 kubelet[677]: I1124 14:00:15.638666     677 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-305966"
	Nov 24 14:00:15 newest-cni-305966 kubelet[677]: I1124 14:00:15.642547     677 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-305966"
	Nov 24 14:00:15 newest-cni-305966 kubelet[677]: I1124 14:00:15.642911     677 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-305966"
	Nov 24 14:00:15 newest-cni-305966 kubelet[677]: I1124 14:00:15.643163     677 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Nov 24 14:00:15 newest-cni-305966 kubelet[677]: I1124 14:00:15.644306     677 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Nov 24 14:00:15 newest-cni-305966 kubelet[677]: E1124 14:00:15.645942     677 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-305966\" already exists" pod="kube-system/kube-apiserver-newest-cni-305966"
	Nov 24 14:00:15 newest-cni-305966 kubelet[677]: I1124 14:00:15.645970     677 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-305966"
	Nov 24 14:00:15 newest-cni-305966 kubelet[677]: E1124 14:00:15.661852     677 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-305966\" already exists" pod="kube-system/kube-controller-manager-newest-cni-305966"
	Nov 24 14:00:15 newest-cni-305966 kubelet[677]: I1124 14:00:15.972751     677 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-305966"
	Nov 24 14:00:15 newest-cni-305966 kubelet[677]: E1124 14:00:15.981170     677 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-305966\" already exists" pod="kube-system/kube-controller-manager-newest-cni-305966"
	Nov 24 14:00:16 newest-cni-305966 kubelet[677]: I1124 14:00:16.117305     677 apiserver.go:52] "Watching apiserver"
	Nov 24 14:00:16 newest-cni-305966 kubelet[677]: I1124 14:00:16.121366     677 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 24 14:00:16 newest-cni-305966 kubelet[677]: I1124 14:00:16.144956     677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/353470b5-271a-4976-9823-aae696867ae3-lib-modules\") pod \"kindnet-7c2kd\" (UID: \"353470b5-271a-4976-9823-aae696867ae3\") " pod="kube-system/kindnet-7c2kd"
	Nov 24 14:00:16 newest-cni-305966 kubelet[677]: I1124 14:00:16.145021     677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d1715fb6-8be2-493f-81c7-9e606cca9736-xtables-lock\") pod \"kube-proxy-bwchr\" (UID: \"d1715fb6-8be2-493f-81c7-9e606cca9736\") " pod="kube-system/kube-proxy-bwchr"
	Nov 24 14:00:16 newest-cni-305966 kubelet[677]: I1124 14:00:16.145049     677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/353470b5-271a-4976-9823-aae696867ae3-cni-cfg\") pod \"kindnet-7c2kd\" (UID: \"353470b5-271a-4976-9823-aae696867ae3\") " pod="kube-system/kindnet-7c2kd"
	Nov 24 14:00:16 newest-cni-305966 kubelet[677]: I1124 14:00:16.145112     677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/353470b5-271a-4976-9823-aae696867ae3-xtables-lock\") pod \"kindnet-7c2kd\" (UID: \"353470b5-271a-4976-9823-aae696867ae3\") " pod="kube-system/kindnet-7c2kd"
	Nov 24 14:00:16 newest-cni-305966 kubelet[677]: I1124 14:00:16.145173     677 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d1715fb6-8be2-493f-81c7-9e606cca9736-lib-modules\") pod \"kube-proxy-bwchr\" (UID: \"d1715fb6-8be2-493f-81c7-9e606cca9736\") " pod="kube-system/kube-proxy-bwchr"
	Nov 24 14:00:17 newest-cni-305966 kubelet[677]: I1124 14:00:17.812419     677 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Nov 24 14:00:17 newest-cni-305966 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 24 14:00:17 newest-cni-305966 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 24 14:00:17 newest-cni-305966 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-305966 -n newest-cni-305966
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-305966 -n newest-cni-305966: exit status 2 (367.842782ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-305966 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-z4d5k storage-provisioner dashboard-metrics-scraper-6ffb444bf9-7m9dl kubernetes-dashboard-855c9754f9-5l2ch
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-305966 describe pod coredns-66bc5c9577-z4d5k storage-provisioner dashboard-metrics-scraper-6ffb444bf9-7m9dl kubernetes-dashboard-855c9754f9-5l2ch
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-305966 describe pod coredns-66bc5c9577-z4d5k storage-provisioner dashboard-metrics-scraper-6ffb444bf9-7m9dl kubernetes-dashboard-855c9754f9-5l2ch: exit status 1 (58.853924ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-z4d5k" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-7m9dl" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-5l2ch" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-305966 describe pod coredns-66bc5c9577-z4d5k storage-provisioner dashboard-metrics-scraper-6ffb444bf9-7m9dl kubernetes-dashboard-855c9754f9-5l2ch: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (5.84s)
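The post-mortem above lists every pod whose phase is not Running and then tries to describe them; by that point the pods had already been removed, hence the NotFound errors. Below is a minimal sketch of the same check run by hand. The context name is copied from the log above; the trailing events query is an extra suggestion for finding out why the pods disappeared, not something the harness runs.

	# List every pod (all namespaces) whose phase is not Running, as helpers_test.go does.
	kubectl --context newest-cni-305966 get po -A --field-selector=status.phase!=Running -o wide

	# Assumption: recent events often explain why such pods are missing or pending.
	kubectl --context newest-cni-305966 get events -A --sort-by=.metadata.creationTimestamp | tail -n 20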

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (6.2s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-098307 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p default-k8s-diff-port-098307 --alsologtostderr -v=1: exit status 80 (2.567163224s)

                                                
                                                
-- stdout --
	* Pausing node default-k8s-diff-port-098307 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1124 14:01:21.884134  634250 out.go:360] Setting OutFile to fd 1 ...
	I1124 14:01:21.884470  634250 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 14:01:21.884483  634250 out.go:374] Setting ErrFile to fd 2...
	I1124 14:01:21.884491  634250 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 14:01:21.884788  634250 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-348000/.minikube/bin
	I1124 14:01:21.885153  634250 out.go:368] Setting JSON to false
	I1124 14:01:21.885183  634250 mustload.go:66] Loading cluster: default-k8s-diff-port-098307
	I1124 14:01:21.885703  634250 config.go:182] Loaded profile config "default-k8s-diff-port-098307": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 14:01:21.886346  634250 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-098307 --format={{.State.Status}}
	I1124 14:01:21.911127  634250 host.go:66] Checking if "default-k8s-diff-port-098307" exists ...
	I1124 14:01:21.911455  634250 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 14:01:21.993444  634250 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:79 OomKillDisable:false NGoroutines:87 SystemTime:2025-11-24 14:01:21.979831812 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 14:01:21.994370  634250 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:default-k8s-diff-port-098307 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s
(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1124 14:01:21.998024  634250 out.go:179] * Pausing node default-k8s-diff-port-098307 ... 
	I1124 14:01:21.999302  634250 host.go:66] Checking if "default-k8s-diff-port-098307" exists ...
	I1124 14:01:21.999619  634250 ssh_runner.go:195] Run: systemctl --version
	I1124 14:01:21.999693  634250 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-098307
	I1124 14:01:22.024525  634250 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33473 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/default-k8s-diff-port-098307/id_rsa Username:docker}
	I1124 14:01:22.140191  634250 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 14:01:22.160132  634250 pause.go:52] kubelet running: true
	I1124 14:01:22.160211  634250 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1124 14:01:22.334146  634250 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1124 14:01:22.334243  634250 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1124 14:01:22.425275  634250 cri.go:89] found id: "182a4de9d25bca95244c7e52820a2bdddc8b5f8d9db612276fa7dd6a907c37ed"
	I1124 14:01:22.425303  634250 cri.go:89] found id: "033562f6653c9ff0552e30ef3a659624de1155d4ab2ae6d29b2138a7aaf7c061"
	I1124 14:01:22.425310  634250 cri.go:89] found id: "29b2d0fa2f290d7f2973b915d97506754f7e042d52e71847abc829f4c5d59d98"
	I1124 14:01:22.425314  634250 cri.go:89] found id: "da2489f88ad2584c48c9d5c92be242dc901ce15d982e934764f720796e292a26"
	I1124 14:01:22.425319  634250 cri.go:89] found id: "2e2ac078f3f0b7db0a740d8374ba34d253a8790349b72321d9682db61b4abb2a"
	I1124 14:01:22.425323  634250 cri.go:89] found id: "3e655d65400a54487d785f394fe12c8ced15c6f5d18334990e13f76babe2a555"
	I1124 14:01:22.425327  634250 cri.go:89] found id: "b39f44030d6ada4f06ee562f173b210839aa14cc4257bcab4e97acb016cd5680"
	I1124 14:01:22.425331  634250 cri.go:89] found id: "efcb2dcd558320e34a3d25837fb159e3f4dd2ff10a8fd5ce8d21450a8a027300"
	I1124 14:01:22.425335  634250 cri.go:89] found id: "d21ba0b0da991a1e74ea43fb065cb9766681c65ea8b443a6386de6f40572612f"
	I1124 14:01:22.425351  634250 cri.go:89] found id: "e39fdc894045a09bb6f11a1cd48943e615c697f3efb9c41f722ac14993e0f490"
	I1124 14:01:22.425356  634250 cri.go:89] found id: "90e245ba4cca96d1989f2bb7458706293f79c44d091efd4a6f13c934fa98aac6"
	I1124 14:01:22.425361  634250 cri.go:89] found id: ""
	I1124 14:01:22.425415  634250 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 14:01:22.438909  634250 retry.go:31] will retry after 163.087995ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T14:01:22Z" level=error msg="open /run/runc: no such file or directory"
	I1124 14:01:22.602145  634250 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 14:01:22.618415  634250 pause.go:52] kubelet running: false
	I1124 14:01:22.618489  634250 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1124 14:01:22.804498  634250 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1124 14:01:22.804589  634250 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1124 14:01:22.874802  634250 cri.go:89] found id: "182a4de9d25bca95244c7e52820a2bdddc8b5f8d9db612276fa7dd6a907c37ed"
	I1124 14:01:22.874825  634250 cri.go:89] found id: "033562f6653c9ff0552e30ef3a659624de1155d4ab2ae6d29b2138a7aaf7c061"
	I1124 14:01:22.874830  634250 cri.go:89] found id: "29b2d0fa2f290d7f2973b915d97506754f7e042d52e71847abc829f4c5d59d98"
	I1124 14:01:22.874833  634250 cri.go:89] found id: "da2489f88ad2584c48c9d5c92be242dc901ce15d982e934764f720796e292a26"
	I1124 14:01:22.874836  634250 cri.go:89] found id: "2e2ac078f3f0b7db0a740d8374ba34d253a8790349b72321d9682db61b4abb2a"
	I1124 14:01:22.874839  634250 cri.go:89] found id: "3e655d65400a54487d785f394fe12c8ced15c6f5d18334990e13f76babe2a555"
	I1124 14:01:22.874842  634250 cri.go:89] found id: "b39f44030d6ada4f06ee562f173b210839aa14cc4257bcab4e97acb016cd5680"
	I1124 14:01:22.874844  634250 cri.go:89] found id: "efcb2dcd558320e34a3d25837fb159e3f4dd2ff10a8fd5ce8d21450a8a027300"
	I1124 14:01:22.874847  634250 cri.go:89] found id: "d21ba0b0da991a1e74ea43fb065cb9766681c65ea8b443a6386de6f40572612f"
	I1124 14:01:22.874852  634250 cri.go:89] found id: "e39fdc894045a09bb6f11a1cd48943e615c697f3efb9c41f722ac14993e0f490"
	I1124 14:01:22.874855  634250 cri.go:89] found id: "90e245ba4cca96d1989f2bb7458706293f79c44d091efd4a6f13c934fa98aac6"
	I1124 14:01:22.874857  634250 cri.go:89] found id: ""
	I1124 14:01:22.874904  634250 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 14:01:22.887013  634250 retry.go:31] will retry after 557.051653ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T14:01:22Z" level=error msg="open /run/runc: no such file or directory"
	I1124 14:01:23.444731  634250 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 14:01:23.457532  634250 pause.go:52] kubelet running: false
	I1124 14:01:23.457592  634250 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1124 14:01:23.600153  634250 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1124 14:01:23.600246  634250 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1124 14:01:23.670219  634250 cri.go:89] found id: "182a4de9d25bca95244c7e52820a2bdddc8b5f8d9db612276fa7dd6a907c37ed"
	I1124 14:01:23.670247  634250 cri.go:89] found id: "033562f6653c9ff0552e30ef3a659624de1155d4ab2ae6d29b2138a7aaf7c061"
	I1124 14:01:23.670253  634250 cri.go:89] found id: "29b2d0fa2f290d7f2973b915d97506754f7e042d52e71847abc829f4c5d59d98"
	I1124 14:01:23.670258  634250 cri.go:89] found id: "da2489f88ad2584c48c9d5c92be242dc901ce15d982e934764f720796e292a26"
	I1124 14:01:23.670262  634250 cri.go:89] found id: "2e2ac078f3f0b7db0a740d8374ba34d253a8790349b72321d9682db61b4abb2a"
	I1124 14:01:23.670267  634250 cri.go:89] found id: "3e655d65400a54487d785f394fe12c8ced15c6f5d18334990e13f76babe2a555"
	I1124 14:01:23.670272  634250 cri.go:89] found id: "b39f44030d6ada4f06ee562f173b210839aa14cc4257bcab4e97acb016cd5680"
	I1124 14:01:23.670277  634250 cri.go:89] found id: "efcb2dcd558320e34a3d25837fb159e3f4dd2ff10a8fd5ce8d21450a8a027300"
	I1124 14:01:23.670282  634250 cri.go:89] found id: "d21ba0b0da991a1e74ea43fb065cb9766681c65ea8b443a6386de6f40572612f"
	I1124 14:01:23.670290  634250 cri.go:89] found id: "e39fdc894045a09bb6f11a1cd48943e615c697f3efb9c41f722ac14993e0f490"
	I1124 14:01:23.670296  634250 cri.go:89] found id: "90e245ba4cca96d1989f2bb7458706293f79c44d091efd4a6f13c934fa98aac6"
	I1124 14:01:23.670299  634250 cri.go:89] found id: ""
	I1124 14:01:23.670331  634250 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 14:01:23.683173  634250 retry.go:31] will retry after 418.834143ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T14:01:23Z" level=error msg="open /run/runc: no such file or directory"
	I1124 14:01:24.102598  634250 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 14:01:24.115372  634250 pause.go:52] kubelet running: false
	I1124 14:01:24.115438  634250 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1124 14:01:24.278410  634250 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1124 14:01:24.278533  634250 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1124 14:01:24.351816  634250 cri.go:89] found id: "182a4de9d25bca95244c7e52820a2bdddc8b5f8d9db612276fa7dd6a907c37ed"
	I1124 14:01:24.351849  634250 cri.go:89] found id: "033562f6653c9ff0552e30ef3a659624de1155d4ab2ae6d29b2138a7aaf7c061"
	I1124 14:01:24.351856  634250 cri.go:89] found id: "29b2d0fa2f290d7f2973b915d97506754f7e042d52e71847abc829f4c5d59d98"
	I1124 14:01:24.351860  634250 cri.go:89] found id: "da2489f88ad2584c48c9d5c92be242dc901ce15d982e934764f720796e292a26"
	I1124 14:01:24.351865  634250 cri.go:89] found id: "2e2ac078f3f0b7db0a740d8374ba34d253a8790349b72321d9682db61b4abb2a"
	I1124 14:01:24.351871  634250 cri.go:89] found id: "3e655d65400a54487d785f394fe12c8ced15c6f5d18334990e13f76babe2a555"
	I1124 14:01:24.351876  634250 cri.go:89] found id: "b39f44030d6ada4f06ee562f173b210839aa14cc4257bcab4e97acb016cd5680"
	I1124 14:01:24.351881  634250 cri.go:89] found id: "efcb2dcd558320e34a3d25837fb159e3f4dd2ff10a8fd5ce8d21450a8a027300"
	I1124 14:01:24.351885  634250 cri.go:89] found id: "d21ba0b0da991a1e74ea43fb065cb9766681c65ea8b443a6386de6f40572612f"
	I1124 14:01:24.351916  634250 cri.go:89] found id: "e39fdc894045a09bb6f11a1cd48943e615c697f3efb9c41f722ac14993e0f490"
	I1124 14:01:24.351921  634250 cri.go:89] found id: "90e245ba4cca96d1989f2bb7458706293f79c44d091efd4a6f13c934fa98aac6"
	I1124 14:01:24.351926  634250 cri.go:89] found id: ""
	I1124 14:01:24.351984  634250 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 14:01:24.365626  634250 out.go:203] 
	W1124 14:01:24.366657  634250 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T14:01:24Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T14:01:24Z" level=error msg="open /run/runc: no such file or directory"
	
	W1124 14:01:24.366674  634250 out.go:285] * 
	* 
	W1124 14:01:24.371401  634250 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1124 14:01:24.372602  634250 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p default-k8s-diff-port-098307 --alsologtostderr -v=1 failed: exit status 80
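The stderr above shows the pause path: stop the kubelet, enumerate containers in the kube-system, kubernetes-dashboard and istio-operator namespaces via crictl, then call "sudo runc list -f json". That last call fails because /run/runc does not exist on this crio node, so after several retries the command exits with GUEST_PAUSE. A minimal sketch for checking the same state by hand follows; the profile name and the crictl invocation are taken from the log, while the default_runtime lookup assumes CRI-O's usual config key and may differ on other versions.

	# Shell into the node for this profile.
	out/minikube-linux-amd64 ssh -p default-k8s-diff-port-098307

	# Same container enumeration the pause path performs.
	sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system

	# The failing call and the state root it expects (missing per the error above).
	sudo runc list -f json
	ls -ld /run/runc

	# Assumption: check which low-level runtime CRI-O is actually configured to use.
	sudo crio config | grep -n 'default_runtime'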
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-098307
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-098307:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "bd0eb14a7bb05301fb7bd937daf3de032579eb0ef51ae85aca7bbf9d5079f948",
	        "Created": "2025-11-24T13:59:20.659772726Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 620062,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T14:00:25.531866474Z",
	            "FinishedAt": "2025-11-24T14:00:24.403959202Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/bd0eb14a7bb05301fb7bd937daf3de032579eb0ef51ae85aca7bbf9d5079f948/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/bd0eb14a7bb05301fb7bd937daf3de032579eb0ef51ae85aca7bbf9d5079f948/hostname",
	        "HostsPath": "/var/lib/docker/containers/bd0eb14a7bb05301fb7bd937daf3de032579eb0ef51ae85aca7bbf9d5079f948/hosts",
	        "LogPath": "/var/lib/docker/containers/bd0eb14a7bb05301fb7bd937daf3de032579eb0ef51ae85aca7bbf9d5079f948/bd0eb14a7bb05301fb7bd937daf3de032579eb0ef51ae85aca7bbf9d5079f948-json.log",
	        "Name": "/default-k8s-diff-port-098307",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "default-k8s-diff-port-098307:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-098307",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "bd0eb14a7bb05301fb7bd937daf3de032579eb0ef51ae85aca7bbf9d5079f948",
	                "LowerDir": "/var/lib/docker/overlay2/8b9802f0f7129508b126d28155eba29f729d36fdf91f74fe0dfcabd3bc59caec-init/diff:/var/lib/docker/overlay2/b17d6205cf290186b389ac7c1255d7274fea54ef27df9ff8755bddd2d25eb638/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8b9802f0f7129508b126d28155eba29f729d36fdf91f74fe0dfcabd3bc59caec/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8b9802f0f7129508b126d28155eba29f729d36fdf91f74fe0dfcabd3bc59caec/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8b9802f0f7129508b126d28155eba29f729d36fdf91f74fe0dfcabd3bc59caec/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-098307",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-098307/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-098307",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-098307",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-098307",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "b523f902f8a37efbb23d47c6e81b0e0312774b7e0196cd1dac0e5afc2462b88e",
	            "SandboxKey": "/var/run/docker/netns/b523f902f8a3",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33473"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33474"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33477"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33475"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33476"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-098307": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8c6a8563f604dbd2ac02c075d8fe7a50789753dd9a0a4910f48e583fa79e5934",
	                    "EndpointID": "04ac5e76f1515495779cdeff55b85de9ad6f5c4af463490697f9ceece2996df6",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "12:7c:9d:e4:75:2b",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-098307",
	                        "bd0eb14a7bb0"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-098307 -n default-k8s-diff-port-098307
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-098307 -n default-k8s-diff-port-098307: exit status 2 (328.616752ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
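The harness probes state with Go-template format strings: docker inspect for the kic container above, and minikube status for a single field here. A small sketch of equivalent probes is below, using the container and profile names from the output above; the combined Status/Paused template is an illustrative variation under those assumptions, not a command the harness runs.

	# Single-line probe of the kic container's state, mirroring the inspect output above.
	docker inspect -f '{{.State.Status}} paused={{.State.Paused}}' default-k8s-diff-port-098307

	# Full status for the profile rather than the single {{.Host}} field used above.
	out/minikube-linux-amd64 status -p default-k8s-diff-port-098307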
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-098307 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-098307 logs -n 25: (1.057984972s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                  ARGS                                                                  │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p auto-165759 sudo cat /etc/kubernetes/kubelet.conf                                                                                   │ auto-165759                  │ jenkins │ v1.37.0 │ 24 Nov 25 14:01 UTC │ 24 Nov 25 14:01 UTC │
	│ ssh     │ -p auto-165759 sudo cat /var/lib/kubelet/config.yaml                                                                                   │ auto-165759                  │ jenkins │ v1.37.0 │ 24 Nov 25 14:01 UTC │ 24 Nov 25 14:01 UTC │
	│ ssh     │ -p auto-165759 sudo systemctl status docker --all --full --no-pager                                                                    │ auto-165759                  │ jenkins │ v1.37.0 │ 24 Nov 25 14:01 UTC │                     │
	│ ssh     │ -p auto-165759 sudo systemctl cat docker --no-pager                                                                                    │ auto-165759                  │ jenkins │ v1.37.0 │ 24 Nov 25 14:01 UTC │ 24 Nov 25 14:01 UTC │
	│ ssh     │ -p auto-165759 sudo cat /etc/docker/daemon.json                                                                                        │ auto-165759                  │ jenkins │ v1.37.0 │ 24 Nov 25 14:01 UTC │                     │
	│ ssh     │ -p auto-165759 sudo docker system info                                                                                                 │ auto-165759                  │ jenkins │ v1.37.0 │ 24 Nov 25 14:01 UTC │                     │
	│ ssh     │ -p auto-165759 sudo systemctl status cri-docker --all --full --no-pager                                                                │ auto-165759                  │ jenkins │ v1.37.0 │ 24 Nov 25 14:01 UTC │                     │
	│ ssh     │ -p auto-165759 sudo systemctl cat cri-docker --no-pager                                                                                │ auto-165759                  │ jenkins │ v1.37.0 │ 24 Nov 25 14:01 UTC │ 24 Nov 25 14:01 UTC │
	│ ssh     │ -p auto-165759 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                           │ auto-165759                  │ jenkins │ v1.37.0 │ 24 Nov 25 14:01 UTC │                     │
	│ ssh     │ -p auto-165759 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                     │ auto-165759                  │ jenkins │ v1.37.0 │ 24 Nov 25 14:01 UTC │ 24 Nov 25 14:01 UTC │
	│ ssh     │ -p auto-165759 sudo cri-dockerd --version                                                                                              │ auto-165759                  │ jenkins │ v1.37.0 │ 24 Nov 25 14:01 UTC │ 24 Nov 25 14:01 UTC │
	│ ssh     │ -p auto-165759 sudo systemctl status containerd --all --full --no-pager                                                                │ auto-165759                  │ jenkins │ v1.37.0 │ 24 Nov 25 14:01 UTC │                     │
	│ ssh     │ -p auto-165759 sudo systemctl cat containerd --no-pager                                                                                │ auto-165759                  │ jenkins │ v1.37.0 │ 24 Nov 25 14:01 UTC │ 24 Nov 25 14:01 UTC │
	│ ssh     │ -p auto-165759 sudo cat /lib/systemd/system/containerd.service                                                                         │ auto-165759                  │ jenkins │ v1.37.0 │ 24 Nov 25 14:01 UTC │ 24 Nov 25 14:01 UTC │
	│ ssh     │ -p auto-165759 sudo cat /etc/containerd/config.toml                                                                                    │ auto-165759                  │ jenkins │ v1.37.0 │ 24 Nov 25 14:01 UTC │ 24 Nov 25 14:01 UTC │
	│ ssh     │ -p auto-165759 sudo containerd config dump                                                                                             │ auto-165759                  │ jenkins │ v1.37.0 │ 24 Nov 25 14:01 UTC │ 24 Nov 25 14:01 UTC │
	│ ssh     │ -p auto-165759 sudo systemctl status crio --all --full --no-pager                                                                      │ auto-165759                  │ jenkins │ v1.37.0 │ 24 Nov 25 14:01 UTC │ 24 Nov 25 14:01 UTC │
	│ ssh     │ -p auto-165759 sudo systemctl cat crio --no-pager                                                                                      │ auto-165759                  │ jenkins │ v1.37.0 │ 24 Nov 25 14:01 UTC │ 24 Nov 25 14:01 UTC │
	│ ssh     │ -p auto-165759 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                            │ auto-165759                  │ jenkins │ v1.37.0 │ 24 Nov 25 14:01 UTC │ 24 Nov 25 14:01 UTC │
	│ ssh     │ -p auto-165759 sudo crio config                                                                                                        │ auto-165759                  │ jenkins │ v1.37.0 │ 24 Nov 25 14:01 UTC │ 24 Nov 25 14:01 UTC │
	│ delete  │ -p auto-165759                                                                                                                         │ auto-165759                  │ jenkins │ v1.37.0 │ 24 Nov 25 14:01 UTC │ 24 Nov 25 14:01 UTC │
	│ start   │ -p calico-165759 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio │ calico-165759                │ jenkins │ v1.37.0 │ 24 Nov 25 14:01 UTC │                     │
	│ ssh     │ -p kindnet-165759 pgrep -a kubelet                                                                                                     │ kindnet-165759               │ jenkins │ v1.37.0 │ 24 Nov 25 14:01 UTC │ 24 Nov 25 14:01 UTC │
	│ image   │ default-k8s-diff-port-098307 image list --format=json                                                                                  │ default-k8s-diff-port-098307 │ jenkins │ v1.37.0 │ 24 Nov 25 14:01 UTC │ 24 Nov 25 14:01 UTC │
	│ pause   │ -p default-k8s-diff-port-098307 --alsologtostderr -v=1                                                                                 │ default-k8s-diff-port-098307 │ jenkins │ v1.37.0 │ 24 Nov 25 14:01 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 14:01:16
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 14:01:16.115638  633029 out.go:360] Setting OutFile to fd 1 ...
	I1124 14:01:16.115907  633029 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 14:01:16.115917  633029 out.go:374] Setting ErrFile to fd 2...
	I1124 14:01:16.115921  633029 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 14:01:16.116170  633029 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-348000/.minikube/bin
	I1124 14:01:16.116657  633029 out.go:368] Setting JSON to false
	I1124 14:01:16.117946  633029 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":9823,"bootTime":1763983053,"procs":320,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 14:01:16.118005  633029 start.go:143] virtualization: kvm guest
	I1124 14:01:16.119742  633029 out.go:179] * [calico-165759] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1124 14:01:16.120856  633029 notify.go:221] Checking for updates...
	I1124 14:01:16.120871  633029 out.go:179]   - MINIKUBE_LOCATION=21932
	I1124 14:01:16.121940  633029 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 14:01:16.123489  633029 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21932-348000/kubeconfig
	I1124 14:01:16.124521  633029 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21932-348000/.minikube
	I1124 14:01:16.125539  633029 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1124 14:01:16.126544  633029 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 14:01:16.127898  633029 config.go:182] Loaded profile config "default-k8s-diff-port-098307": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 14:01:16.128015  633029 config.go:182] Loaded profile config "embed-certs-456660": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 14:01:16.128136  633029 config.go:182] Loaded profile config "kindnet-165759": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 14:01:16.128282  633029 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 14:01:16.153298  633029 docker.go:124] docker version: linux-29.0.3:Docker Engine - Community
	I1124 14:01:16.153395  633029 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 14:01:16.211093  633029 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-24 14:01:16.20085004 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 14:01:16.211264  633029 docker.go:319] overlay module found
	I1124 14:01:16.213611  633029 out.go:179] * Using the docker driver based on user configuration
	I1124 14:01:16.214810  633029 start.go:309] selected driver: docker
	I1124 14:01:16.214825  633029 start.go:927] validating driver "docker" against <nil>
	I1124 14:01:16.214837  633029 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 14:01:16.215486  633029 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 14:01:16.271073  633029 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-24 14:01:16.261659875 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 14:01:16.271226  633029 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1124 14:01:16.271427  633029 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 14:01:16.272977  633029 out.go:179] * Using Docker driver with root privileges
	I1124 14:01:16.274116  633029 cni.go:84] Creating CNI manager for "calico"
	I1124 14:01:16.274139  633029 start_flags.go:336] Found "Calico" CNI - setting NetworkPlugin=cni
	I1124 14:01:16.274205  633029 start.go:353] cluster config:
	{Name:calico-165759 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:calico-165759 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:
0 GPUs: AutoPauseInterval:1m0s}
	I1124 14:01:16.275398  633029 out.go:179] * Starting "calico-165759" primary control-plane node in "calico-165759" cluster
	I1124 14:01:16.276382  633029 cache.go:134] Beginning downloading kic base image for docker with crio
	I1124 14:01:16.277469  633029 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1124 14:01:16.278637  633029 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 14:01:16.278673  633029 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21932-348000/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1124 14:01:16.278682  633029 cache.go:65] Caching tarball of preloaded images
	I1124 14:01:16.278718  633029 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1124 14:01:16.278758  633029 preload.go:238] Found /home/jenkins/minikube-integration/21932-348000/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1124 14:01:16.278770  633029 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1124 14:01:16.278855  633029 profile.go:143] Saving config to /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/calico-165759/config.json ...
	I1124 14:01:16.278873  633029 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/calico-165759/config.json: {Name:mkb6a17fc4f60ad81050e57901dc35443b8c60da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:01:16.297532  633029 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1124 14:01:16.297552  633029 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1124 14:01:16.297567  633029 cache.go:240] Successfully downloaded all kic artifacts
	I1124 14:01:16.297599  633029 start.go:360] acquireMachinesLock for calico-165759: {Name:mk78f259fca7d2ac6d5e16a346a46567b2a44671 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 14:01:16.297685  633029 start.go:364] duration metric: took 71.402µs to acquireMachinesLock for "calico-165759"
	I1124 14:01:16.297706  633029 start.go:93] Provisioning new machine with config: &{Name:calico-165759 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:calico-165759 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwar
ePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 14:01:16.297769  633029 start.go:125] createHost starting for "" (driver="docker")
	W1124 14:01:13.081631  622437 pod_ready.go:104] pod "coredns-66bc5c9577-nnp2c" is not "Ready", error: <nil>
	W1124 14:01:15.581673  622437 pod_ready.go:104] pod "coredns-66bc5c9577-nnp2c" is not "Ready", error: <nil>
	W1124 14:01:17.582018  622437 pod_ready.go:104] pod "coredns-66bc5c9577-nnp2c" is not "Ready", error: <nil>
	I1124 14:01:19.082331  622437 pod_ready.go:94] pod "coredns-66bc5c9577-nnp2c" is "Ready"
	I1124 14:01:19.082362  622437 pod_ready.go:86] duration metric: took 36.005745038s for pod "coredns-66bc5c9577-nnp2c" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:01:19.084829  622437 pod_ready.go:83] waiting for pod "etcd-embed-certs-456660" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:01:19.088356  622437 pod_ready.go:94] pod "etcd-embed-certs-456660" is "Ready"
	I1124 14:01:19.088382  622437 pod_ready.go:86] duration metric: took 3.529566ms for pod "etcd-embed-certs-456660" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:01:19.090320  622437 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-456660" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:01:19.093678  622437 pod_ready.go:94] pod "kube-apiserver-embed-certs-456660" is "Ready"
	I1124 14:01:19.093697  622437 pod_ready.go:86] duration metric: took 3.35724ms for pod "kube-apiserver-embed-certs-456660" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:01:19.095478  622437 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-456660" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:01:19.281072  622437 pod_ready.go:94] pod "kube-controller-manager-embed-certs-456660" is "Ready"
	I1124 14:01:19.281098  622437 pod_ready.go:86] duration metric: took 185.602415ms for pod "kube-controller-manager-embed-certs-456660" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:01:19.481648  622437 pod_ready.go:83] waiting for pod "kube-proxy-k5bxk" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:01:19.880455  622437 pod_ready.go:94] pod "kube-proxy-k5bxk" is "Ready"
	I1124 14:01:19.880488  622437 pod_ready.go:86] duration metric: took 398.805191ms for pod "kube-proxy-k5bxk" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:01:20.080090  622437 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-456660" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:01:20.480860  622437 pod_ready.go:94] pod "kube-scheduler-embed-certs-456660" is "Ready"
	I1124 14:01:20.480885  622437 pod_ready.go:86] duration metric: took 400.773584ms for pod "kube-scheduler-embed-certs-456660" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:01:20.480917  622437 pod_ready.go:40] duration metric: took 37.40753881s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 14:01:20.524827  622437 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1124 14:01:20.551088  622437 out.go:179] * Done! kubectl is now configured to use "embed-certs-456660" cluster and "default" namespace by default
	I1124 14:01:16.299325  633029 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1124 14:01:16.299579  633029 start.go:159] libmachine.API.Create for "calico-165759" (driver="docker")
	I1124 14:01:16.299608  633029 client.go:173] LocalClient.Create starting
	I1124 14:01:16.299678  633029 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21932-348000/.minikube/certs/ca.pem
	I1124 14:01:16.299712  633029 main.go:143] libmachine: Decoding PEM data...
	I1124 14:01:16.299731  633029 main.go:143] libmachine: Parsing certificate...
	I1124 14:01:16.299800  633029 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21932-348000/.minikube/certs/cert.pem
	I1124 14:01:16.299821  633029 main.go:143] libmachine: Decoding PEM data...
	I1124 14:01:16.299832  633029 main.go:143] libmachine: Parsing certificate...
	I1124 14:01:16.300193  633029 cli_runner.go:164] Run: docker network inspect calico-165759 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1124 14:01:16.315317  633029 cli_runner.go:211] docker network inspect calico-165759 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1124 14:01:16.315377  633029 network_create.go:284] running [docker network inspect calico-165759] to gather additional debugging logs...
	I1124 14:01:16.315393  633029 cli_runner.go:164] Run: docker network inspect calico-165759
	W1124 14:01:16.331245  633029 cli_runner.go:211] docker network inspect calico-165759 returned with exit code 1
	I1124 14:01:16.331267  633029 network_create.go:287] error running [docker network inspect calico-165759]: docker network inspect calico-165759: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network calico-165759 not found
	I1124 14:01:16.331277  633029 network_create.go:289] output of [docker network inspect calico-165759]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network calico-165759 not found
	
	** /stderr **
	I1124 14:01:16.331392  633029 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 14:01:16.348282  633029 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-d51e7dfe1049 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:4a:86:1b:17:16:ff} reservation:<nil>}
	I1124 14:01:16.349088  633029 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-e3a6280986d1 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:92:e6:88:24:ba:69} reservation:<nil>}
	I1124 14:01:16.349551  633029 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-e4f79d672777 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:32:e2:7c:23:0e:27} reservation:<nil>}
	I1124 14:01:16.350374  633029 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e78c80}
	I1124 14:01:16.350408  633029 network_create.go:124] attempt to create docker network calico-165759 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1124 14:01:16.350449  633029 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-165759 calico-165759
	I1124 14:01:16.397295  633029 network_create.go:108] docker network calico-165759 192.168.76.0/24 created
	I1124 14:01:16.397325  633029 kic.go:121] calculated static IP "192.168.76.2" for the "calico-165759" container
	I1124 14:01:16.397375  633029 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1124 14:01:16.413683  633029 cli_runner.go:164] Run: docker volume create calico-165759 --label name.minikube.sigs.k8s.io=calico-165759 --label created_by.minikube.sigs.k8s.io=true
	I1124 14:01:16.431631  633029 oci.go:103] Successfully created a docker volume calico-165759
	I1124 14:01:16.431698  633029 cli_runner.go:164] Run: docker run --rm --name calico-165759-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-165759 --entrypoint /usr/bin/test -v calico-165759:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1124 14:01:16.811395  633029 oci.go:107] Successfully prepared a docker volume calico-165759
	I1124 14:01:16.811479  633029 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 14:01:16.811494  633029 kic.go:194] Starting extracting preloaded images to volume ...
	I1124 14:01:16.811551  633029 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21932-348000/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v calico-165759:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
	
	
	==> CRI-O <==
	Nov 24 14:01:07 default-k8s-diff-port-098307 crio[563]: time="2025-11-24T14:01:07.468869768Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 14:01:07 default-k8s-diff-port-098307 crio[563]: time="2025-11-24T14:01:07.469097347Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/70c39b1e4e9c9e316551f51b4fa40d3321778f14fc44da1eced0d904ff128cfe/merged/etc/passwd: no such file or directory"
	Nov 24 14:01:07 default-k8s-diff-port-098307 crio[563]: time="2025-11-24T14:01:07.469133675Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/70c39b1e4e9c9e316551f51b4fa40d3321778f14fc44da1eced0d904ff128cfe/merged/etc/group: no such file or directory"
	Nov 24 14:01:07 default-k8s-diff-port-098307 crio[563]: time="2025-11-24T14:01:07.46942714Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 14:01:07 default-k8s-diff-port-098307 crio[563]: time="2025-11-24T14:01:07.493154376Z" level=info msg="Created container 182a4de9d25bca95244c7e52820a2bdddc8b5f8d9db612276fa7dd6a907c37ed: kube-system/storage-provisioner/storage-provisioner" id=1a227bf7-0536-494f-a5a3-66b428206411 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 14:01:07 default-k8s-diff-port-098307 crio[563]: time="2025-11-24T14:01:07.493781608Z" level=info msg="Starting container: 182a4de9d25bca95244c7e52820a2bdddc8b5f8d9db612276fa7dd6a907c37ed" id=6871be2a-9c32-4539-a67a-24edf75fdecc name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 14:01:07 default-k8s-diff-port-098307 crio[563]: time="2025-11-24T14:01:07.495877177Z" level=info msg="Started container" PID=1687 containerID=182a4de9d25bca95244c7e52820a2bdddc8b5f8d9db612276fa7dd6a907c37ed description=kube-system/storage-provisioner/storage-provisioner id=6871be2a-9c32-4539-a67a-24edf75fdecc name=/runtime.v1.RuntimeService/StartContainer sandboxID=78c03823911a20545998a1e0e93ab9b965b376fdf807276e294e6cb0d41a1b3b
	Nov 24 14:01:17 default-k8s-diff-port-098307 crio[563]: time="2025-11-24T14:01:17.098318413Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 24 14:01:17 default-k8s-diff-port-098307 crio[563]: time="2025-11-24T14:01:17.103150738Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 24 14:01:17 default-k8s-diff-port-098307 crio[563]: time="2025-11-24T14:01:17.103179412Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 24 14:01:17 default-k8s-diff-port-098307 crio[563]: time="2025-11-24T14:01:17.103197237Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 24 14:01:17 default-k8s-diff-port-098307 crio[563]: time="2025-11-24T14:01:17.106650924Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 24 14:01:17 default-k8s-diff-port-098307 crio[563]: time="2025-11-24T14:01:17.106675843Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 24 14:01:17 default-k8s-diff-port-098307 crio[563]: time="2025-11-24T14:01:17.106691561Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 24 14:01:17 default-k8s-diff-port-098307 crio[563]: time="2025-11-24T14:01:17.110353177Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 24 14:01:17 default-k8s-diff-port-098307 crio[563]: time="2025-11-24T14:01:17.110378995Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 24 14:01:17 default-k8s-diff-port-098307 crio[563]: time="2025-11-24T14:01:17.110399822Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 24 14:01:17 default-k8s-diff-port-098307 crio[563]: time="2025-11-24T14:01:17.114108959Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 24 14:01:17 default-k8s-diff-port-098307 crio[563]: time="2025-11-24T14:01:17.114133989Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 24 14:01:17 default-k8s-diff-port-098307 crio[563]: time="2025-11-24T14:01:17.114155709Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 24 14:01:17 default-k8s-diff-port-098307 crio[563]: time="2025-11-24T14:01:17.117530862Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 24 14:01:17 default-k8s-diff-port-098307 crio[563]: time="2025-11-24T14:01:17.117553769Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 24 14:01:17 default-k8s-diff-port-098307 crio[563]: time="2025-11-24T14:01:17.117573316Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 24 14:01:17 default-k8s-diff-port-098307 crio[563]: time="2025-11-24T14:01:17.121262014Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 24 14:01:17 default-k8s-diff-port-098307 crio[563]: time="2025-11-24T14:01:17.121279425Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	182a4de9d25bc       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           17 seconds ago      Running             storage-provisioner         1                   78c03823911a2       storage-provisioner                                    kube-system
	e39fdc894045a       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           23 seconds ago      Exited              dashboard-metrics-scraper   2                   efddaf5553512       dashboard-metrics-scraper-6ffb444bf9-l5qpb             kubernetes-dashboard
	90e245ba4cca9       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   37 seconds ago      Running             kubernetes-dashboard        0                   c307ff22b87e4       kubernetes-dashboard-855c9754f9-wqmwj                  kubernetes-dashboard
	033562f6653c9       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           48 seconds ago      Running             coredns                     0                   ae16fb75d1fb0       coredns-66bc5c9577-kzf7b                               kube-system
	e1d48b30c6c1f       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           48 seconds ago      Running             busybox                     1                   aabf593d2af41       busybox                                                default
	29b2d0fa2f290       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           48 seconds ago      Running             kindnet-cni                 0                   e91d1617d722e       kindnet-qswz4                                          kube-system
	da2489f88ad25       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           48 seconds ago      Exited              storage-provisioner         0                   78c03823911a2       storage-provisioner                                    kube-system
	2e2ac078f3f0b       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           48 seconds ago      Running             kube-proxy                  0                   5a3123aa73dbf       kube-proxy-8ck8x                                       kube-system
	3e655d65400a5       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           51 seconds ago      Running             kube-scheduler              0                   5bd9f06da2f5f       kube-scheduler-default-k8s-diff-port-098307            kube-system
	b39f44030d6ad       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           51 seconds ago      Running             kube-apiserver              0                   70ef5397d1f47       kube-apiserver-default-k8s-diff-port-098307            kube-system
	efcb2dcd55832       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           51 seconds ago      Running             kube-controller-manager     0                   661fa6e00ec90       kube-controller-manager-default-k8s-diff-port-098307   kube-system
	d21ba0b0da991       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           51 seconds ago      Running             etcd                        0                   9ed02d9490c53       etcd-default-k8s-diff-port-098307                      kube-system
	
	
	==> coredns [033562f6653c9ff0552e30ef3a659624de1155d4ab2ae6d29b2138a7aaf7c061] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:37059 - 1858 "HINFO IN 4834109310681306563.3751431008118080161. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.122651924s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-098307
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-098307
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b5d1c9f4e75f4e638a533695fd62619949cefcab
	                    minikube.k8s.io/name=default-k8s-diff-port-098307
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T13_59_39_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 13:59:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-098307
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 14:01:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 14:01:06 +0000   Mon, 24 Nov 2025 13:59:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 14:01:06 +0000   Mon, 24 Nov 2025 13:59:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 14:01:06 +0000   Mon, 24 Nov 2025 13:59:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 14:01:06 +0000   Mon, 24 Nov 2025 13:59:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    default-k8s-diff-port-098307
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                346f1d74-50ec-4327-a799-559dc98af4c4
	  Boot ID:                    9a34d64a-eb17-4892-9c0b-855837aec864
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         87s
	  kube-system                 coredns-66bc5c9577-kzf7b                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     101s
	  kube-system                 etcd-default-k8s-diff-port-098307                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         108s
	  kube-system                 kindnet-qswz4                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      101s
	  kube-system                 kube-apiserver-default-k8s-diff-port-098307             250m (3%)     0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-098307    200m (2%)     0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 kube-proxy-8ck8x                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         101s
	  kube-system                 kube-scheduler-default-k8s-diff-port-098307             100m (1%)     0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         101s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-l5qpb              0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-wqmwj                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         46s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 100s               kube-proxy       
	  Normal  Starting                 48s                kube-proxy       
	  Normal  NodeHasSufficientMemory  107s               kubelet          Node default-k8s-diff-port-098307 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    107s               kubelet          Node default-k8s-diff-port-098307 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     107s               kubelet          Node default-k8s-diff-port-098307 status is now: NodeHasSufficientPID
	  Normal  Starting                 107s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           103s               node-controller  Node default-k8s-diff-port-098307 event: Registered Node default-k8s-diff-port-098307 in Controller
	  Normal  NodeReady                90s                kubelet          Node default-k8s-diff-port-098307 status is now: NodeReady
	  Normal  Starting                 52s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  52s (x8 over 52s)  kubelet          Node default-k8s-diff-port-098307 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    52s (x8 over 52s)  kubelet          Node default-k8s-diff-port-098307 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     52s (x8 over 52s)  kubelet          Node default-k8s-diff-port-098307 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           47s                node-controller  Node default-k8s-diff-port-098307 event: Registered Node default-k8s-diff-port-098307 in Controller
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a c8 62 0b 56 43 08 06
	[  +0.000389] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 7e 1c 4f d3 0f 6c 08 06
	[Nov24 13:16] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +1.054353] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +1.023912] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +1.023872] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +1.023897] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +1.023891] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +2.047768] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +4.031637] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +8.191144] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[ +16.382308] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[Nov24 13:17] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	
	
	==> etcd [d21ba0b0da991a1e74ea43fb065cb9766681c65ea8b443a6386de6f40572612f] <==
	{"level":"warn","ts":"2025-11-24T14:00:34.857124Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:00:34.865745Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:00:34.872553Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40612","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:00:34.879426Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:00:34.892862Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40662","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:00:34.899622Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40678","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:00:34.907502Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40684","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:00:34.914606Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40688","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:00:34.927011Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40718","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:00:34.933856Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40732","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:00:34.940818Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40744","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:00:34.948520Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40760","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:00:34.955984Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40796","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:00:34.961879Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40800","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:00:34.969326Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:00:34.976580Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40838","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:00:34.982491Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40864","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:00:34.996340Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40890","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:00:35.002566Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40910","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:00:35.015154Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40936","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:00:35.021831Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40956","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:00:35.028691Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40974","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:00:35.076019Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40994","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:00:47.178108Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"102.761102ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-66bc5c9577-kzf7b\" limit:1 ","response":"range_response_count:1 size:5946"}
	{"level":"info","ts":"2025-11-24T14:00:47.178192Z","caller":"traceutil/trace.go:172","msg":"trace[1525900628] range","detail":"{range_begin:/registry/pods/kube-system/coredns-66bc5c9577-kzf7b; range_end:; response_count:1; response_revision:586; }","duration":"102.878235ms","start":"2025-11-24T14:00:47.075297Z","end":"2025-11-24T14:00:47.178175Z","steps":["trace[1525900628] 'range keys from in-memory index tree'  (duration: 102.608451ms)"],"step_count":1}
	
	
	==> kernel <==
	 14:01:25 up  2:43,  0 user,  load average: 3.97, 3.33, 2.25
	Linux default-k8s-diff-port-098307 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [29b2d0fa2f290d7f2973b915d97506754f7e042d52e71847abc829f4c5d59d98] <==
	I1124 14:00:36.888561       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 14:00:36.888772       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1124 14:00:36.888942       1 main.go:148] setting mtu 1500 for CNI 
	I1124 14:00:36.888961       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 14:00:36.888985       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T14:00:37Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 14:00:37.090816       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 14:00:37.090877       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 14:00:37.090912       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 14:00:37.091055       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1124 14:01:07.092158       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1124 14:01:07.092168       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1124 14:01:07.093101       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1124 14:01:07.093109       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1124 14:01:08.591943       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 14:01:08.591972       1 metrics.go:72] Registering metrics
	I1124 14:01:08.592063       1 controller.go:711] "Syncing nftables rules"
	I1124 14:01:17.098010       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1124 14:01:17.098051       1 main.go:301] handling current node
	
	
	==> kube-apiserver [b39f44030d6ada4f06ee562f173b210839aa14cc4257bcab4e97acb016cd5680] <==
	I1124 14:00:35.554877       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1124 14:00:35.554928       1 aggregator.go:171] initial CRD sync complete...
	I1124 14:00:35.554937       1 autoregister_controller.go:144] Starting autoregister controller
	I1124 14:00:35.554943       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1124 14:00:35.554950       1 cache.go:39] Caches are synced for autoregister controller
	I1124 14:00:35.555164       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1124 14:00:35.555167       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1124 14:00:35.555235       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1124 14:00:35.559864       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1124 14:00:35.562442       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1124 14:00:35.562467       1 policy_source.go:240] refreshing policies
	I1124 14:00:35.583785       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1124 14:00:35.601498       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 14:00:35.829721       1 controller.go:667] quota admission added evaluator for: namespaces
	I1124 14:00:35.859379       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1124 14:00:35.876768       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 14:00:35.883722       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 14:00:35.895324       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1124 14:00:35.922788       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.98.103.166"}
	I1124 14:00:35.931330       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.103.179.146"}
	I1124 14:00:36.456917       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1124 14:00:39.228080       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1124 14:00:39.228125       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1124 14:00:39.328214       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1124 14:00:39.433383       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [efcb2dcd558320e34a3d25837fb159e3f4dd2ff10a8fd5ce8d21450a8a027300] <==
	I1124 14:00:38.873942       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1124 14:00:38.873994       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1124 14:00:38.874059       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1124 14:00:38.874160       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1124 14:00:38.874274       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-098307"
	I1124 14:00:38.874330       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1124 14:00:38.873953       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1124 14:00:38.873966       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1124 14:00:38.874019       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1124 14:00:38.875078       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1124 14:00:38.875081       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1124 14:00:38.875105       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1124 14:00:38.875806       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1124 14:00:38.881818       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 14:00:38.881834       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1124 14:00:38.881842       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1124 14:00:38.885370       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1124 14:00:38.892931       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 14:00:38.898300       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1124 14:00:38.898356       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1124 14:00:38.898389       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1124 14:00:38.898397       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1124 14:00:38.898402       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1124 14:00:38.900736       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1124 14:00:38.909066       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [2e2ac078f3f0b7db0a740d8374ba34d253a8790349b72321d9682db61b4abb2a] <==
	I1124 14:00:36.745781       1 server_linux.go:53] "Using iptables proxy"
	I1124 14:00:36.802042       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1124 14:00:36.902188       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1124 14:00:36.902229       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1124 14:00:36.902309       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 14:00:36.926397       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 14:00:36.926458       1 server_linux.go:132] "Using iptables Proxier"
	I1124 14:00:36.932364       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 14:00:36.932748       1 server.go:527] "Version info" version="v1.34.1"
	I1124 14:00:36.932788       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 14:00:36.934175       1 config.go:200] "Starting service config controller"
	I1124 14:00:36.934238       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 14:00:36.934183       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 14:00:36.934328       1 config.go:309] "Starting node config controller"
	I1124 14:00:36.934660       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 14:00:36.934673       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1124 14:00:36.934196       1 config.go:106] "Starting endpoint slice config controller"
	I1124 14:00:36.934693       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 14:00:36.934329       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 14:00:37.034613       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1124 14:00:37.034743       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1124 14:00:37.034762       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [3e655d65400a54487d785f394fe12c8ced15c6f5d18334990e13f76babe2a555] <==
	I1124 14:00:34.568136       1 serving.go:386] Generated self-signed cert in-memory
	W1124 14:00:35.477462       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1124 14:00:35.477501       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1124 14:00:35.477531       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1124 14:00:35.477542       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1124 14:00:35.516092       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1124 14:00:35.516124       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 14:00:35.518863       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 14:00:35.518921       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 14:00:35.519387       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1124 14:00:35.519478       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1124 14:00:35.525712       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1124 14:00:35.529649       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1124 14:00:35.529737       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1124 14:00:35.529826       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1124 14:00:35.529913       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1124 14:00:35.529999       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1124 14:00:35.530113       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	I1124 14:00:35.620033       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 24 14:00:39 default-k8s-diff-port-098307 kubelet[716]: I1124 14:00:39.506535     716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/cf3fb153-4e49-4ca7-9df1-98d9cb94e424-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-l5qpb\" (UID: \"cf3fb153-4e49-4ca7-9df1-98d9cb94e424\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-l5qpb"
	Nov 24 14:00:39 default-k8s-diff-port-098307 kubelet[716]: I1124 14:00:39.506586     716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdb5r\" (UniqueName: \"kubernetes.io/projected/cf3fb153-4e49-4ca7-9df1-98d9cb94e424-kube-api-access-xdb5r\") pod \"dashboard-metrics-scraper-6ffb444bf9-l5qpb\" (UID: \"cf3fb153-4e49-4ca7-9df1-98d9cb94e424\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-l5qpb"
	Nov 24 14:00:39 default-k8s-diff-port-098307 kubelet[716]: I1124 14:00:39.506613     716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/6c8af29b-f744-4b39-94fa-3b71fa5188ee-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-wqmwj\" (UID: \"6c8af29b-f744-4b39-94fa-3b71fa5188ee\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-wqmwj"
	Nov 24 14:00:43 default-k8s-diff-port-098307 kubelet[716]: I1124 14:00:43.386666     716 scope.go:117] "RemoveContainer" containerID="f38809cfbfd2d9f5af50c1a46da59aabad88a7f018ab45286bf18ef935008dfb"
	Nov 24 14:00:44 default-k8s-diff-port-098307 kubelet[716]: I1124 14:00:44.391316     716 scope.go:117] "RemoveContainer" containerID="3278d9ec1a04122b762c190ba36ec0395d919fb429008753d6b34b37b1db34fb"
	Nov 24 14:00:44 default-k8s-diff-port-098307 kubelet[716]: E1124 14:00:44.391487     716 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-l5qpb_kubernetes-dashboard(cf3fb153-4e49-4ca7-9df1-98d9cb94e424)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-l5qpb" podUID="cf3fb153-4e49-4ca7-9df1-98d9cb94e424"
	Nov 24 14:00:44 default-k8s-diff-port-098307 kubelet[716]: I1124 14:00:44.391502     716 scope.go:117] "RemoveContainer" containerID="f38809cfbfd2d9f5af50c1a46da59aabad88a7f018ab45286bf18ef935008dfb"
	Nov 24 14:00:45 default-k8s-diff-port-098307 kubelet[716]: I1124 14:00:45.395735     716 scope.go:117] "RemoveContainer" containerID="3278d9ec1a04122b762c190ba36ec0395d919fb429008753d6b34b37b1db34fb"
	Nov 24 14:00:45 default-k8s-diff-port-098307 kubelet[716]: E1124 14:00:45.395923     716 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-l5qpb_kubernetes-dashboard(cf3fb153-4e49-4ca7-9df1-98d9cb94e424)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-l5qpb" podUID="cf3fb153-4e49-4ca7-9df1-98d9cb94e424"
	Nov 24 14:00:46 default-k8s-diff-port-098307 kubelet[716]: I1124 14:00:46.398389     716 scope.go:117] "RemoveContainer" containerID="3278d9ec1a04122b762c190ba36ec0395d919fb429008753d6b34b37b1db34fb"
	Nov 24 14:00:46 default-k8s-diff-port-098307 kubelet[716]: E1124 14:00:46.398646     716 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-l5qpb_kubernetes-dashboard(cf3fb153-4e49-4ca7-9df1-98d9cb94e424)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-l5qpb" podUID="cf3fb153-4e49-4ca7-9df1-98d9cb94e424"
	Nov 24 14:00:50 default-k8s-diff-port-098307 kubelet[716]: I1124 14:00:50.232879     716 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-wqmwj" podStartSLOduration=3.25917679 podStartE2EDuration="11.232856846s" podCreationTimestamp="2025-11-24 14:00:39 +0000 UTC" firstStartedPulling="2025-11-24 14:00:39.791111079 +0000 UTC m=+6.548400941" lastFinishedPulling="2025-11-24 14:00:47.764791137 +0000 UTC m=+14.522080997" observedRunningTime="2025-11-24 14:00:48.428338636 +0000 UTC m=+15.185628507" watchObservedRunningTime="2025-11-24 14:00:50.232856846 +0000 UTC m=+16.990146723"
	Nov 24 14:01:01 default-k8s-diff-port-098307 kubelet[716]: I1124 14:01:01.337623     716 scope.go:117] "RemoveContainer" containerID="3278d9ec1a04122b762c190ba36ec0395d919fb429008753d6b34b37b1db34fb"
	Nov 24 14:01:01 default-k8s-diff-port-098307 kubelet[716]: I1124 14:01:01.443757     716 scope.go:117] "RemoveContainer" containerID="3278d9ec1a04122b762c190ba36ec0395d919fb429008753d6b34b37b1db34fb"
	Nov 24 14:01:01 default-k8s-diff-port-098307 kubelet[716]: I1124 14:01:01.444033     716 scope.go:117] "RemoveContainer" containerID="e39fdc894045a09bb6f11a1cd48943e615c697f3efb9c41f722ac14993e0f490"
	Nov 24 14:01:01 default-k8s-diff-port-098307 kubelet[716]: E1124 14:01:01.444214     716 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-l5qpb_kubernetes-dashboard(cf3fb153-4e49-4ca7-9df1-98d9cb94e424)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-l5qpb" podUID="cf3fb153-4e49-4ca7-9df1-98d9cb94e424"
	Nov 24 14:01:06 default-k8s-diff-port-098307 kubelet[716]: I1124 14:01:06.197781     716 scope.go:117] "RemoveContainer" containerID="e39fdc894045a09bb6f11a1cd48943e615c697f3efb9c41f722ac14993e0f490"
	Nov 24 14:01:06 default-k8s-diff-port-098307 kubelet[716]: E1124 14:01:06.197979     716 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-l5qpb_kubernetes-dashboard(cf3fb153-4e49-4ca7-9df1-98d9cb94e424)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-l5qpb" podUID="cf3fb153-4e49-4ca7-9df1-98d9cb94e424"
	Nov 24 14:01:07 default-k8s-diff-port-098307 kubelet[716]: I1124 14:01:07.461595     716 scope.go:117] "RemoveContainer" containerID="da2489f88ad2584c48c9d5c92be242dc901ce15d982e934764f720796e292a26"
	Nov 24 14:01:17 default-k8s-diff-port-098307 kubelet[716]: I1124 14:01:17.338153     716 scope.go:117] "RemoveContainer" containerID="e39fdc894045a09bb6f11a1cd48943e615c697f3efb9c41f722ac14993e0f490"
	Nov 24 14:01:17 default-k8s-diff-port-098307 kubelet[716]: E1124 14:01:17.338394     716 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-l5qpb_kubernetes-dashboard(cf3fb153-4e49-4ca7-9df1-98d9cb94e424)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-l5qpb" podUID="cf3fb153-4e49-4ca7-9df1-98d9cb94e424"
	Nov 24 14:01:22 default-k8s-diff-port-098307 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 24 14:01:22 default-k8s-diff-port-098307 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 24 14:01:22 default-k8s-diff-port-098307 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 24 14:01:22 default-k8s-diff-port-098307 systemd[1]: kubelet.service: Consumed 1.546s CPU time.
	
	
	==> kubernetes-dashboard [90e245ba4cca96d1989f2bb7458706293f79c44d091efd4a6f13c934fa98aac6] <==
	2025/11/24 14:00:47 Starting overwatch
	2025/11/24 14:00:47 Using namespace: kubernetes-dashboard
	2025/11/24 14:00:47 Using in-cluster config to connect to apiserver
	2025/11/24 14:00:47 Using secret token for csrf signing
	2025/11/24 14:00:47 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/24 14:00:47 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/24 14:00:47 Successful initial request to the apiserver, version: v1.34.1
	2025/11/24 14:00:47 Generating JWE encryption key
	2025/11/24 14:00:47 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/24 14:00:47 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/24 14:00:48 Initializing JWE encryption key from synchronized object
	2025/11/24 14:00:48 Creating in-cluster Sidecar client
	2025/11/24 14:00:48 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/24 14:00:48 Serving insecurely on HTTP port: 9090
	2025/11/24 14:01:18 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [182a4de9d25bca95244c7e52820a2bdddc8b5f8d9db612276fa7dd6a907c37ed] <==
	I1124 14:01:07.507720       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1124 14:01:07.515293       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1124 14:01:07.515350       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1124 14:01:07.517199       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:01:10.972104       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:01:15.232632       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:01:18.831133       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:01:21.885736       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:01:24.907790       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:01:24.913004       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 14:01:24.913144       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1124 14:01:24.913245       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"81dbd621-c84a-49e1-bca9-3457968fc43a", APIVersion:"v1", ResourceVersion:"629", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-098307_887e559d-8eae-48c2-b33d-a1eb948081f6 became leader
	I1124 14:01:24.913354       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-098307_887e559d-8eae-48c2-b33d-a1eb948081f6!
	W1124 14:01:24.915168       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:01:24.918931       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 14:01:25.013619       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-098307_887e559d-8eae-48c2-b33d-a1eb948081f6!
	
	
	==> storage-provisioner [da2489f88ad2584c48c9d5c92be242dc901ce15d982e934764f720796e292a26] <==
	I1124 14:00:36.716964       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1124 14:01:06.718414       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
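The earlier storage-provisioner instance above (da2489...) dies with a fatal "i/o timeout" dialing https://10.96.0.1:443, i.e. its request to the in-cluster apiserver service IP never got an answer around the pause/unpause cycle, matching the list timeouts kindnet logs at 14:01:07. A minimal sketch for re-checking that path by hand, assuming this run's profile name and that curl is available inside the kicbase node image:

	out/minikube-linux-amd64 -p default-k8s-diff-port-098307 ssh -- curl -sk https://10.96.0.1:443/version
	out/minikube-linux-amd64 -p default-k8s-diff-port-098307 logs -n 25

The second command is the same post-mortem log collection the harness runs further down; the curl is only a hand probe of the service VIP from inside the node, not part of the recorded run.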
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-098307 -n default-k8s-diff-port-098307
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-098307 -n default-k8s-diff-port-098307: exit status 2 (342.919728ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-098307 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-098307
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-098307:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "bd0eb14a7bb05301fb7bd937daf3de032579eb0ef51ae85aca7bbf9d5079f948",
	        "Created": "2025-11-24T13:59:20.659772726Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 620062,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T14:00:25.531866474Z",
	            "FinishedAt": "2025-11-24T14:00:24.403959202Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/bd0eb14a7bb05301fb7bd937daf3de032579eb0ef51ae85aca7bbf9d5079f948/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/bd0eb14a7bb05301fb7bd937daf3de032579eb0ef51ae85aca7bbf9d5079f948/hostname",
	        "HostsPath": "/var/lib/docker/containers/bd0eb14a7bb05301fb7bd937daf3de032579eb0ef51ae85aca7bbf9d5079f948/hosts",
	        "LogPath": "/var/lib/docker/containers/bd0eb14a7bb05301fb7bd937daf3de032579eb0ef51ae85aca7bbf9d5079f948/bd0eb14a7bb05301fb7bd937daf3de032579eb0ef51ae85aca7bbf9d5079f948-json.log",
	        "Name": "/default-k8s-diff-port-098307",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "default-k8s-diff-port-098307:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-098307",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "bd0eb14a7bb05301fb7bd937daf3de032579eb0ef51ae85aca7bbf9d5079f948",
	                "LowerDir": "/var/lib/docker/overlay2/8b9802f0f7129508b126d28155eba29f729d36fdf91f74fe0dfcabd3bc59caec-init/diff:/var/lib/docker/overlay2/b17d6205cf290186b389ac7c1255d7274fea54ef27df9ff8755bddd2d25eb638/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8b9802f0f7129508b126d28155eba29f729d36fdf91f74fe0dfcabd3bc59caec/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8b9802f0f7129508b126d28155eba29f729d36fdf91f74fe0dfcabd3bc59caec/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8b9802f0f7129508b126d28155eba29f729d36fdf91f74fe0dfcabd3bc59caec/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-098307",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-098307/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-098307",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-098307",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-098307",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "b523f902f8a37efbb23d47c6e81b0e0312774b7e0196cd1dac0e5afc2462b88e",
	            "SandboxKey": "/var/run/docker/netns/b523f902f8a3",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33473"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33474"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33477"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33475"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33476"
	                    }
	                ]
	            },
	            "Networks": {
	                "default-k8s-diff-port-098307": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "8c6a8563f604dbd2ac02c075d8fe7a50789753dd9a0a4910f48e583fa79e5934",
	                    "EndpointID": "04ac5e76f1515495779cdeff55b85de9ad6f5c4af463490697f9ceece2996df6",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "MacAddress": "12:7c:9d:e4:75:2b",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-098307",
	                        "bd0eb14a7bb0"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
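The inspect output above already carries the fields that matter for this Pause failure: the node container is "running", not docker-paused, and the apiserver port 8444/tcp is published on 127.0.0.1:33476. A short sketch (container name taken from this run) for pulling just those fields out when triaging locally:

	docker inspect -f 'status={{.State.Status}} paused={{.State.Paused}}' default-k8s-diff-port-098307
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}' default-k8s-diff-port-098307

Note that minikube pause freezes the Kubernetes containers through the runtime inside the node (crio here), so .State.Paused on the outer docker container is expected to remain false whether or not the pause itself succeeded.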
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-098307 -n default-k8s-diff-port-098307
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-098307 -n default-k8s-diff-port-098307: exit status 2 (335.398486ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
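Both status probes above print "Running" yet exit with status 2, which the harness explicitly tolerates ("may be ok"). When reproducing by hand it can be easier to dump several components in one call; a sketch reusing the same --format templating the harness uses, with field names assumed from minikube's status output:

	out/minikube-linux-amd64 status -p default-k8s-diff-port-098307 --format '{{.Host}} {{.Kubelet}} {{.APIServer}} {{.Kubeconfig}}'

After a successful pause these fields would typically report Paused/Stopped rather than Running.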
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-098307 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-098307 logs -n 25: (1.054977934s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                  ARGS                                                                  │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p auto-165759 sudo cat /etc/kubernetes/kubelet.conf                                                                                   │ auto-165759                  │ jenkins │ v1.37.0 │ 24 Nov 25 14:01 UTC │ 24 Nov 25 14:01 UTC │
	│ ssh     │ -p auto-165759 sudo cat /var/lib/kubelet/config.yaml                                                                                   │ auto-165759                  │ jenkins │ v1.37.0 │ 24 Nov 25 14:01 UTC │ 24 Nov 25 14:01 UTC │
	│ ssh     │ -p auto-165759 sudo systemctl status docker --all --full --no-pager                                                                    │ auto-165759                  │ jenkins │ v1.37.0 │ 24 Nov 25 14:01 UTC │                     │
	│ ssh     │ -p auto-165759 sudo systemctl cat docker --no-pager                                                                                    │ auto-165759                  │ jenkins │ v1.37.0 │ 24 Nov 25 14:01 UTC │ 24 Nov 25 14:01 UTC │
	│ ssh     │ -p auto-165759 sudo cat /etc/docker/daemon.json                                                                                        │ auto-165759                  │ jenkins │ v1.37.0 │ 24 Nov 25 14:01 UTC │                     │
	│ ssh     │ -p auto-165759 sudo docker system info                                                                                                 │ auto-165759                  │ jenkins │ v1.37.0 │ 24 Nov 25 14:01 UTC │                     │
	│ ssh     │ -p auto-165759 sudo systemctl status cri-docker --all --full --no-pager                                                                │ auto-165759                  │ jenkins │ v1.37.0 │ 24 Nov 25 14:01 UTC │                     │
	│ ssh     │ -p auto-165759 sudo systemctl cat cri-docker --no-pager                                                                                │ auto-165759                  │ jenkins │ v1.37.0 │ 24 Nov 25 14:01 UTC │ 24 Nov 25 14:01 UTC │
	│ ssh     │ -p auto-165759 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                           │ auto-165759                  │ jenkins │ v1.37.0 │ 24 Nov 25 14:01 UTC │                     │
	│ ssh     │ -p auto-165759 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                     │ auto-165759                  │ jenkins │ v1.37.0 │ 24 Nov 25 14:01 UTC │ 24 Nov 25 14:01 UTC │
	│ ssh     │ -p auto-165759 sudo cri-dockerd --version                                                                                              │ auto-165759                  │ jenkins │ v1.37.0 │ 24 Nov 25 14:01 UTC │ 24 Nov 25 14:01 UTC │
	│ ssh     │ -p auto-165759 sudo systemctl status containerd --all --full --no-pager                                                                │ auto-165759                  │ jenkins │ v1.37.0 │ 24 Nov 25 14:01 UTC │                     │
	│ ssh     │ -p auto-165759 sudo systemctl cat containerd --no-pager                                                                                │ auto-165759                  │ jenkins │ v1.37.0 │ 24 Nov 25 14:01 UTC │ 24 Nov 25 14:01 UTC │
	│ ssh     │ -p auto-165759 sudo cat /lib/systemd/system/containerd.service                                                                         │ auto-165759                  │ jenkins │ v1.37.0 │ 24 Nov 25 14:01 UTC │ 24 Nov 25 14:01 UTC │
	│ ssh     │ -p auto-165759 sudo cat /etc/containerd/config.toml                                                                                    │ auto-165759                  │ jenkins │ v1.37.0 │ 24 Nov 25 14:01 UTC │ 24 Nov 25 14:01 UTC │
	│ ssh     │ -p auto-165759 sudo containerd config dump                                                                                             │ auto-165759                  │ jenkins │ v1.37.0 │ 24 Nov 25 14:01 UTC │ 24 Nov 25 14:01 UTC │
	│ ssh     │ -p auto-165759 sudo systemctl status crio --all --full --no-pager                                                                      │ auto-165759                  │ jenkins │ v1.37.0 │ 24 Nov 25 14:01 UTC │ 24 Nov 25 14:01 UTC │
	│ ssh     │ -p auto-165759 sudo systemctl cat crio --no-pager                                                                                      │ auto-165759                  │ jenkins │ v1.37.0 │ 24 Nov 25 14:01 UTC │ 24 Nov 25 14:01 UTC │
	│ ssh     │ -p auto-165759 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                            │ auto-165759                  │ jenkins │ v1.37.0 │ 24 Nov 25 14:01 UTC │ 24 Nov 25 14:01 UTC │
	│ ssh     │ -p auto-165759 sudo crio config                                                                                                        │ auto-165759                  │ jenkins │ v1.37.0 │ 24 Nov 25 14:01 UTC │ 24 Nov 25 14:01 UTC │
	│ delete  │ -p auto-165759                                                                                                                         │ auto-165759                  │ jenkins │ v1.37.0 │ 24 Nov 25 14:01 UTC │ 24 Nov 25 14:01 UTC │
	│ start   │ -p calico-165759 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio │ calico-165759                │ jenkins │ v1.37.0 │ 24 Nov 25 14:01 UTC │                     │
	│ ssh     │ -p kindnet-165759 pgrep -a kubelet                                                                                                     │ kindnet-165759               │ jenkins │ v1.37.0 │ 24 Nov 25 14:01 UTC │ 24 Nov 25 14:01 UTC │
	│ image   │ default-k8s-diff-port-098307 image list --format=json                                                                                  │ default-k8s-diff-port-098307 │ jenkins │ v1.37.0 │ 24 Nov 25 14:01 UTC │ 24 Nov 25 14:01 UTC │
	│ pause   │ -p default-k8s-diff-port-098307 --alsologtostderr -v=1                                                                                 │ default-k8s-diff-port-098307 │ jenkins │ v1.37.0 │ 24 Nov 25 14:01 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 14:01:16
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 14:01:16.115638  633029 out.go:360] Setting OutFile to fd 1 ...
	I1124 14:01:16.115907  633029 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 14:01:16.115917  633029 out.go:374] Setting ErrFile to fd 2...
	I1124 14:01:16.115921  633029 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 14:01:16.116170  633029 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-348000/.minikube/bin
	I1124 14:01:16.116657  633029 out.go:368] Setting JSON to false
	I1124 14:01:16.117946  633029 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":9823,"bootTime":1763983053,"procs":320,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 14:01:16.118005  633029 start.go:143] virtualization: kvm guest
	I1124 14:01:16.119742  633029 out.go:179] * [calico-165759] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1124 14:01:16.120856  633029 notify.go:221] Checking for updates...
	I1124 14:01:16.120871  633029 out.go:179]   - MINIKUBE_LOCATION=21932
	I1124 14:01:16.121940  633029 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 14:01:16.123489  633029 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21932-348000/kubeconfig
	I1124 14:01:16.124521  633029 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21932-348000/.minikube
	I1124 14:01:16.125539  633029 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1124 14:01:16.126544  633029 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 14:01:16.127898  633029 config.go:182] Loaded profile config "default-k8s-diff-port-098307": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 14:01:16.128015  633029 config.go:182] Loaded profile config "embed-certs-456660": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 14:01:16.128136  633029 config.go:182] Loaded profile config "kindnet-165759": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 14:01:16.128282  633029 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 14:01:16.153298  633029 docker.go:124] docker version: linux-29.0.3:Docker Engine - Community
	I1124 14:01:16.153395  633029 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 14:01:16.211093  633029 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-24 14:01:16.20085004 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 14:01:16.211264  633029 docker.go:319] overlay module found
	I1124 14:01:16.213611  633029 out.go:179] * Using the docker driver based on user configuration
	I1124 14:01:16.214810  633029 start.go:309] selected driver: docker
	I1124 14:01:16.214825  633029 start.go:927] validating driver "docker" against <nil>
	I1124 14:01:16.214837  633029 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 14:01:16.215486  633029 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 14:01:16.271073  633029 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-24 14:01:16.261659875 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 14:01:16.271226  633029 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1124 14:01:16.271427  633029 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 14:01:16.272977  633029 out.go:179] * Using Docker driver with root privileges
	I1124 14:01:16.274116  633029 cni.go:84] Creating CNI manager for "calico"
	I1124 14:01:16.274139  633029 start_flags.go:336] Found "Calico" CNI - setting NetworkPlugin=cni
	I1124 14:01:16.274205  633029 start.go:353] cluster config:
	{Name:calico-165759 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:calico-165759 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:
0 GPUs: AutoPauseInterval:1m0s}
	I1124 14:01:16.275398  633029 out.go:179] * Starting "calico-165759" primary control-plane node in "calico-165759" cluster
	I1124 14:01:16.276382  633029 cache.go:134] Beginning downloading kic base image for docker with crio
	I1124 14:01:16.277469  633029 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1124 14:01:16.278637  633029 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 14:01:16.278673  633029 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21932-348000/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1124 14:01:16.278682  633029 cache.go:65] Caching tarball of preloaded images
	I1124 14:01:16.278718  633029 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1124 14:01:16.278758  633029 preload.go:238] Found /home/jenkins/minikube-integration/21932-348000/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1124 14:01:16.278770  633029 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1124 14:01:16.278855  633029 profile.go:143] Saving config to /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/calico-165759/config.json ...
	I1124 14:01:16.278873  633029 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/calico-165759/config.json: {Name:mkb6a17fc4f60ad81050e57901dc35443b8c60da Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:01:16.297532  633029 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1124 14:01:16.297552  633029 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1124 14:01:16.297567  633029 cache.go:240] Successfully downloaded all kic artifacts
	I1124 14:01:16.297599  633029 start.go:360] acquireMachinesLock for calico-165759: {Name:mk78f259fca7d2ac6d5e16a346a46567b2a44671 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 14:01:16.297685  633029 start.go:364] duration metric: took 71.402µs to acquireMachinesLock for "calico-165759"
	I1124 14:01:16.297706  633029 start.go:93] Provisioning new machine with config: &{Name:calico-165759 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:calico-165759 Namespace:default APIServerHAVIP: APIServerName:min
ikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwar
ePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 14:01:16.297769  633029 start.go:125] createHost starting for "" (driver="docker")
	W1124 14:01:13.081631  622437 pod_ready.go:104] pod "coredns-66bc5c9577-nnp2c" is not "Ready", error: <nil>
	W1124 14:01:15.581673  622437 pod_ready.go:104] pod "coredns-66bc5c9577-nnp2c" is not "Ready", error: <nil>
	W1124 14:01:17.582018  622437 pod_ready.go:104] pod "coredns-66bc5c9577-nnp2c" is not "Ready", error: <nil>
	I1124 14:01:19.082331  622437 pod_ready.go:94] pod "coredns-66bc5c9577-nnp2c" is "Ready"
	I1124 14:01:19.082362  622437 pod_ready.go:86] duration metric: took 36.005745038s for pod "coredns-66bc5c9577-nnp2c" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:01:19.084829  622437 pod_ready.go:83] waiting for pod "etcd-embed-certs-456660" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:01:19.088356  622437 pod_ready.go:94] pod "etcd-embed-certs-456660" is "Ready"
	I1124 14:01:19.088382  622437 pod_ready.go:86] duration metric: took 3.529566ms for pod "etcd-embed-certs-456660" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:01:19.090320  622437 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-456660" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:01:19.093678  622437 pod_ready.go:94] pod "kube-apiserver-embed-certs-456660" is "Ready"
	I1124 14:01:19.093697  622437 pod_ready.go:86] duration metric: took 3.35724ms for pod "kube-apiserver-embed-certs-456660" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:01:19.095478  622437 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-456660" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:01:19.281072  622437 pod_ready.go:94] pod "kube-controller-manager-embed-certs-456660" is "Ready"
	I1124 14:01:19.281098  622437 pod_ready.go:86] duration metric: took 185.602415ms for pod "kube-controller-manager-embed-certs-456660" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:01:19.481648  622437 pod_ready.go:83] waiting for pod "kube-proxy-k5bxk" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:01:19.880455  622437 pod_ready.go:94] pod "kube-proxy-k5bxk" is "Ready"
	I1124 14:01:19.880488  622437 pod_ready.go:86] duration metric: took 398.805191ms for pod "kube-proxy-k5bxk" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:01:20.080090  622437 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-456660" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:01:20.480860  622437 pod_ready.go:94] pod "kube-scheduler-embed-certs-456660" is "Ready"
	I1124 14:01:20.480885  622437 pod_ready.go:86] duration metric: took 400.773584ms for pod "kube-scheduler-embed-certs-456660" in "kube-system" namespace to be "Ready" or be gone ...
	I1124 14:01:20.480917  622437 pod_ready.go:40] duration metric: took 37.40753881s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1124 14:01:20.524827  622437 start.go:625] kubectl: 1.34.2, cluster: 1.34.1 (minor skew: 0)
	I1124 14:01:20.551088  622437 out.go:179] * Done! kubectl is now configured to use "embed-certs-456660" cluster and "default" namespace by default
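
The embed-certs log above loops in pod_ready.go until each kube-system pod is "Ready" or gone. A minimal sketch of that kind of readiness polling, written against client-go (this is an illustration with assumed names and timeouts, not minikube's actual pod_ready.go):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether a pod's Ready condition is True.
func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// waitForLabel polls until every pod matching the selector is Ready,
// or until no matching pods remain ("Ready or be gone", as in the log).
func waitForLabel(ctx context.Context, cs *kubernetes.Clientset, ns, selector string) error {
	return wait.PollUntilContextTimeout(ctx, 2*time.Second, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				return false, nil // treat API hiccups as transient and keep polling
			}
			if len(pods.Items) == 0 {
				return true, nil // no matching pods left
			}
			for i := range pods.Items {
				if !podReady(&pods.Items[i]) {
					return false, nil
				}
			}
			return true, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitForLabel(context.Background(), cs, "kube-system", "k8s-app=kube-dns"); err != nil {
		panic(err)
	}
	fmt.Println("all kube-dns pods Ready")
}

The log runs one such wait per label (kube-dns, etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler), which is why the total "extra waiting" duration is reported separately at the end.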
	I1124 14:01:16.299325  633029 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1124 14:01:16.299579  633029 start.go:159] libmachine.API.Create for "calico-165759" (driver="docker")
	I1124 14:01:16.299608  633029 client.go:173] LocalClient.Create starting
	I1124 14:01:16.299678  633029 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21932-348000/.minikube/certs/ca.pem
	I1124 14:01:16.299712  633029 main.go:143] libmachine: Decoding PEM data...
	I1124 14:01:16.299731  633029 main.go:143] libmachine: Parsing certificate...
	I1124 14:01:16.299800  633029 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21932-348000/.minikube/certs/cert.pem
	I1124 14:01:16.299821  633029 main.go:143] libmachine: Decoding PEM data...
	I1124 14:01:16.299832  633029 main.go:143] libmachine: Parsing certificate...
	I1124 14:01:16.300193  633029 cli_runner.go:164] Run: docker network inspect calico-165759 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1124 14:01:16.315317  633029 cli_runner.go:211] docker network inspect calico-165759 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1124 14:01:16.315377  633029 network_create.go:284] running [docker network inspect calico-165759] to gather additional debugging logs...
	I1124 14:01:16.315393  633029 cli_runner.go:164] Run: docker network inspect calico-165759
	W1124 14:01:16.331245  633029 cli_runner.go:211] docker network inspect calico-165759 returned with exit code 1
	I1124 14:01:16.331267  633029 network_create.go:287] error running [docker network inspect calico-165759]: docker network inspect calico-165759: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network calico-165759 not found
	I1124 14:01:16.331277  633029 network_create.go:289] output of [docker network inspect calico-165759]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network calico-165759 not found
	
	** /stderr **
	I1124 14:01:16.331392  633029 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 14:01:16.348282  633029 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-d51e7dfe1049 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:4a:86:1b:17:16:ff} reservation:<nil>}
	I1124 14:01:16.349088  633029 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-e3a6280986d1 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:92:e6:88:24:ba:69} reservation:<nil>}
	I1124 14:01:16.349551  633029 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-e4f79d672777 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:32:e2:7c:23:0e:27} reservation:<nil>}
	I1124 14:01:16.350374  633029 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e78c80}
	I1124 14:01:16.350408  633029 network_create.go:124] attempt to create docker network calico-165759 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1124 14:01:16.350449  633029 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=calico-165759 calico-165759
	I1124 14:01:16.397295  633029 network_create.go:108] docker network calico-165759 192.168.76.0/24 created
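
The subnet probing above skips 192.168.49.0/24, 192.168.58.0/24 and 192.168.67.0/24 (already held by other bridges) and settles on 192.168.76.0/24. A minimal sketch of that first-free-/24 selection, using only the Go standard library (the candidate step of 9 in the third octet and the helper names are inferred from the log, not taken from minikube's network.go):

package main

import (
	"fmt"
	"net"
)

// firstFreeSubnet walks candidate 192.168.x.0/24 blocks (49, 58, 67, 76, ...)
// and returns the first one that does not overlap any subnet already in use.
func firstFreeSubnet(taken []*net.IPNet) (*net.IPNet, error) {
	for octet := 49; octet < 256; octet += 9 {
		_, candidate, _ := net.ParseCIDR(fmt.Sprintf("192.168.%d.0/24", octet))
		collides := false
		for _, t := range taken {
			if t.Contains(candidate.IP) || candidate.Contains(t.IP) {
				collides = true
				break
			}
		}
		if !collides {
			return candidate, nil
		}
	}
	return nil, fmt.Errorf("no free 192.168.x.0/24 subnet found")
}

func main() {
	var taken []*net.IPNet
	for _, cidr := range []string{"192.168.49.0/24", "192.168.58.0/24", "192.168.67.0/24"} {
		_, n, _ := net.ParseCIDR(cidr)
		taken = append(taken, n)
	}
	free, err := firstFreeSubnet(taken)
	if err != nil {
		panic(err)
	}
	fmt.Println("using free private subnet", free) // 192.168.76.0/24, matching the log
}

Once a block is chosen, the `docker network create --driver=bridge --subnet=... --gateway=...` call shown above reserves it, and the node container is later pinned to the .2 address of that subnet.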
	I1124 14:01:16.397325  633029 kic.go:121] calculated static IP "192.168.76.2" for the "calico-165759" container
	I1124 14:01:16.397375  633029 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1124 14:01:16.413683  633029 cli_runner.go:164] Run: docker volume create calico-165759 --label name.minikube.sigs.k8s.io=calico-165759 --label created_by.minikube.sigs.k8s.io=true
	I1124 14:01:16.431631  633029 oci.go:103] Successfully created a docker volume calico-165759
	I1124 14:01:16.431698  633029 cli_runner.go:164] Run: docker run --rm --name calico-165759-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-165759 --entrypoint /usr/bin/test -v calico-165759:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1124 14:01:16.811395  633029 oci.go:107] Successfully prepared a docker volume calico-165759
	I1124 14:01:16.811479  633029 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 14:01:16.811494  633029 kic.go:194] Starting extracting preloaded images to volume ...
	I1124 14:01:16.811551  633029 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21932-348000/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v calico-165759:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
	I1124 14:01:21.252465  633029 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21932-348000/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v calico-165759:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir: (4.440856637s)
	I1124 14:01:21.252502  633029 kic.go:203] duration metric: took 4.441002865s to extract preloaded images to volume ...
	W1124 14:01:21.252605  633029 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1124 14:01:21.252675  633029 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1124 14:01:21.252715  633029 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1124 14:01:21.339041  633029 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-165759 --name calico-165759 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-165759 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-165759 --network calico-165759 --ip 192.168.76.2 --volume calico-165759:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f
	I1124 14:01:21.755229  633029 cli_runner.go:164] Run: docker container inspect calico-165759 --format={{.State.Running}}
	I1124 14:01:21.778939  633029 cli_runner.go:164] Run: docker container inspect calico-165759 --format={{.State.Status}}
	I1124 14:01:21.803169  633029 cli_runner.go:164] Run: docker exec calico-165759 stat /var/lib/dpkg/alternatives/iptables
	I1124 14:01:21.853164  633029 oci.go:144] the created container "calico-165759" has a running status.
	I1124 14:01:21.853219  633029 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21932-348000/.minikube/machines/calico-165759/id_rsa...
	I1124 14:01:22.046655  633029 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21932-348000/.minikube/machines/calico-165759/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1124 14:01:22.080535  633029 cli_runner.go:164] Run: docker container inspect calico-165759 --format={{.State.Status}}
	I1124 14:01:22.110676  633029 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1124 14:01:22.110722  633029 kic_runner.go:114] Args: [docker exec --privileged calico-165759 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1124 14:01:22.172401  633029 cli_runner.go:164] Run: docker container inspect calico-165759 --format={{.State.Status}}
	I1124 14:01:22.193834  633029 machine.go:94] provisionDockerMachine start ...
	I1124 14:01:22.194013  633029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-165759
	I1124 14:01:22.219802  633029 main.go:143] libmachine: Using SSH client type: native
	I1124 14:01:22.271590  633029 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33489 <nil> <nil>}
	I1124 14:01:22.271613  633029 main.go:143] libmachine: About to run SSH command:
	hostname
	I1124 14:01:22.425292  633029 main.go:143] libmachine: SSH cmd err, output: <nil>: calico-165759
	
	I1124 14:01:22.425314  633029 ubuntu.go:182] provisioning hostname "calico-165759"
	I1124 14:01:22.425433  633029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-165759
	I1124 14:01:22.447435  633029 main.go:143] libmachine: Using SSH client type: native
	I1124 14:01:22.447787  633029 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33489 <nil> <nil>}
	I1124 14:01:22.447805  633029 main.go:143] libmachine: About to run SSH command:
	sudo hostname calico-165759 && echo "calico-165759" | sudo tee /etc/hostname
	I1124 14:01:22.614484  633029 main.go:143] libmachine: SSH cmd err, output: <nil>: calico-165759
	
	I1124 14:01:22.614566  633029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-165759
	I1124 14:01:22.637723  633029 main.go:143] libmachine: Using SSH client type: native
	I1124 14:01:22.638156  633029 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33489 <nil> <nil>}
	I1124 14:01:22.638187  633029 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-165759' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-165759/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-165759' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1124 14:01:22.794287  633029 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1124 14:01:22.794318  633029 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21932-348000/.minikube CaCertPath:/home/jenkins/minikube-integration/21932-348000/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21932-348000/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21932-348000/.minikube}
	I1124 14:01:22.794342  633029 ubuntu.go:190] setting up certificates
	I1124 14:01:22.794363  633029 provision.go:84] configureAuth start
	I1124 14:01:22.794432  633029 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-165759
	I1124 14:01:22.815803  633029 provision.go:143] copyHostCerts
	I1124 14:01:22.815870  633029 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-348000/.minikube/ca.pem, removing ...
	I1124 14:01:22.815884  633029 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-348000/.minikube/ca.pem
	I1124 14:01:22.815985  633029 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-348000/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21932-348000/.minikube/ca.pem (1078 bytes)
	I1124 14:01:22.816137  633029 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-348000/.minikube/cert.pem, removing ...
	I1124 14:01:22.816159  633029 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-348000/.minikube/cert.pem
	I1124 14:01:22.816218  633029 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-348000/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21932-348000/.minikube/cert.pem (1123 bytes)
	I1124 14:01:22.816323  633029 exec_runner.go:144] found /home/jenkins/minikube-integration/21932-348000/.minikube/key.pem, removing ...
	I1124 14:01:22.816334  633029 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21932-348000/.minikube/key.pem
	I1124 14:01:22.816373  633029 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21932-348000/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21932-348000/.minikube/key.pem (1675 bytes)
	I1124 14:01:22.816471  633029 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21932-348000/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21932-348000/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21932-348000/.minikube/certs/ca-key.pem org=jenkins.calico-165759 san=[127.0.0.1 192.168.76.2 calico-165759 localhost minikube]
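
The server cert generated here is signed by the minikube CA and carries the SANs listed in san=[127.0.0.1 192.168.76.2 calico-165759 localhost minikube]. A minimal sketch of producing such a certificate with crypto/x509 (key sizes, validity and the in-memory CA are illustrative; the real flow loads ca.pem/ca-key.pem from the .minikube/certs directory instead of generating them):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Illustrative CA; in the logged flow this comes from the existing minikubeCA files.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server cert whose SANs match the ones in the log line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube", Organization: []string{"jenkins.calico-165759"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
		DNSNames:     []string{"calico-165759", "localhost", "minikube"},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}

The resulting server.pem/server-key.pem pair is what copyRemoteCerts then pushes to /etc/docker on the node, as the following log lines show.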
	I1124 14:01:22.871234  633029 provision.go:177] copyRemoteCerts
	I1124 14:01:22.871288  633029 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1124 14:01:22.871331  633029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-165759
	I1124 14:01:22.889303  633029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33489 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/calico-165759/id_rsa Username:docker}
	I1124 14:01:22.989884  633029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1124 14:01:23.010831  633029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1124 14:01:23.028810  633029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1124 14:01:23.045360  633029 provision.go:87] duration metric: took 250.977721ms to configureAuth
	I1124 14:01:23.045389  633029 ubuntu.go:206] setting minikube options for container-runtime
	I1124 14:01:23.045548  633029 config.go:182] Loaded profile config "calico-165759": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 14:01:23.045643  633029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-165759
	I1124 14:01:23.062789  633029 main.go:143] libmachine: Using SSH client type: native
	I1124 14:01:23.063062  633029 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x84d1c0] 0x84fe60 <nil>  [] 0s} 127.0.0.1 33489 <nil> <nil>}
	I1124 14:01:23.063083  633029 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1124 14:01:23.348957  633029 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1124 14:01:23.348986  633029 machine.go:97] duration metric: took 1.155130607s to provisionDockerMachine
	I1124 14:01:23.348998  633029 client.go:176] duration metric: took 7.049382203s to LocalClient.Create
	I1124 14:01:23.349025  633029 start.go:167] duration metric: took 7.049441816s to libmachine.API.Create "calico-165759"
	I1124 14:01:23.349038  633029 start.go:293] postStartSetup for "calico-165759" (driver="docker")
	I1124 14:01:23.349057  633029 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1124 14:01:23.349120  633029 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1124 14:01:23.349184  633029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-165759
	I1124 14:01:23.366503  633029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33489 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/calico-165759/id_rsa Username:docker}
	I1124 14:01:23.467985  633029 ssh_runner.go:195] Run: cat /etc/os-release
	I1124 14:01:23.471525  633029 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1124 14:01:23.471556  633029 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1124 14:01:23.471569  633029 filesync.go:126] Scanning /home/jenkins/minikube-integration/21932-348000/.minikube/addons for local assets ...
	I1124 14:01:23.471628  633029 filesync.go:126] Scanning /home/jenkins/minikube-integration/21932-348000/.minikube/files for local assets ...
	I1124 14:01:23.471748  633029 filesync.go:149] local asset: /home/jenkins/minikube-integration/21932-348000/.minikube/files/etc/ssl/certs/3515932.pem -> 3515932.pem in /etc/ssl/certs
	I1124 14:01:23.471871  633029 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1124 14:01:23.479197  633029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/files/etc/ssl/certs/3515932.pem --> /etc/ssl/certs/3515932.pem (1708 bytes)
	I1124 14:01:23.505238  633029 start.go:296] duration metric: took 156.18249ms for postStartSetup
	I1124 14:01:23.505520  633029 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-165759
	I1124 14:01:23.523528  633029 profile.go:143] Saving config to /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/calico-165759/config.json ...
	I1124 14:01:23.523746  633029 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 14:01:23.523788  633029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-165759
	I1124 14:01:23.540569  633029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33489 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/calico-165759/id_rsa Username:docker}
	I1124 14:01:23.637729  633029 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1124 14:01:23.642421  633029 start.go:128] duration metric: took 7.344636477s to createHost
	I1124 14:01:23.642452  633029 start.go:83] releasing machines lock for "calico-165759", held for 7.344755355s
	I1124 14:01:23.642521  633029 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-165759
	I1124 14:01:23.663069  633029 ssh_runner.go:195] Run: cat /version.json
	I1124 14:01:23.663118  633029 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1124 14:01:23.663128  633029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-165759
	I1124 14:01:23.663202  633029 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-165759
	I1124 14:01:23.683092  633029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33489 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/calico-165759/id_rsa Username:docker}
	I1124 14:01:23.683496  633029 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33489 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/calico-165759/id_rsa Username:docker}
	I1124 14:01:23.848568  633029 ssh_runner.go:195] Run: systemctl --version
	I1124 14:01:23.854474  633029 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1124 14:01:23.887987  633029 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1124 14:01:23.892243  633029 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1124 14:01:23.892308  633029 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1124 14:01:23.916146  633029 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1124 14:01:23.916163  633029 start.go:496] detecting cgroup driver to use...
	I1124 14:01:23.916193  633029 detect.go:190] detected "systemd" cgroup driver on host os
	I1124 14:01:23.916241  633029 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1124 14:01:23.931695  633029 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1124 14:01:23.943111  633029 docker.go:218] disabling cri-docker service (if available) ...
	I1124 14:01:23.943157  633029 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1124 14:01:23.958300  633029 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1124 14:01:23.974407  633029 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1124 14:01:24.055248  633029 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1124 14:01:24.145678  633029 docker.go:234] disabling docker service ...
	I1124 14:01:24.145743  633029 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1124 14:01:24.172578  633029 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1124 14:01:24.185023  633029 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1124 14:01:24.282431  633029 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1124 14:01:24.375452  633029 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1124 14:01:24.389755  633029 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1124 14:01:24.406154  633029 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1124 14:01:24.406225  633029 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:01:24.416628  633029 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1124 14:01:24.416677  633029 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:01:24.425843  633029 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:01:24.434545  633029 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:01:24.443877  633029 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1124 14:01:24.452424  633029 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:01:24.461523  633029 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:01:24.477751  633029 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1124 14:01:24.486913  633029 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1124 14:01:24.495128  633029 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1124 14:01:24.502818  633029 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 14:01:24.589328  633029 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1124 14:01:24.794162  633029 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1124 14:01:24.794224  633029 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1124 14:01:24.798144  633029 start.go:564] Will wait 60s for crictl version
	I1124 14:01:24.798188  633029 ssh_runner.go:195] Run: which crictl
	I1124 14:01:24.801621  633029 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1124 14:01:24.826628  633029 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.2
	RuntimeApiVersion:  v1
	I1124 14:01:24.826709  633029 ssh_runner.go:195] Run: crio --version
	I1124 14:01:24.862831  633029 ssh_runner.go:195] Run: crio --version
	I1124 14:01:24.894157  633029 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.2 ...
	I1124 14:01:24.895504  633029 cli_runner.go:164] Run: docker network inspect calico-165759 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 14:01:24.913074  633029 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1124 14:01:24.918277  633029 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
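
This shell one-liner makes the host.minikube.internal mapping idempotent: strip any old line for the name, append the fresh one, then replace /etc/hosts from a temp file. A minimal stdlib-only Go sketch of the same technique (illustrative; the temp-file name and permissions are assumptions), which also covers the identical control-plane.minikube.internal update later in the log:

package main

import (
	"fmt"
	"os"
	"strings"
)

// setHostsEntry drops any existing line ending in "\t<name>" and appends
// "<ip>\t<name>", replacing the file via a temp copy.
func setHostsEntry(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // old entry for this name: drop it
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
	tmp := path + ".minikube.tmp"
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		return err
	}
	return os.Rename(tmp, path)
}

func main() {
	if err := setHostsEntry("/etc/hosts", "192.168.76.1", "host.minikube.internal"); err != nil {
		panic(err)
	}
}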
	I1124 14:01:24.929366  633029 kubeadm.go:884] updating cluster {Name:calico-165759 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:calico-165759 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath:
SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1124 14:01:24.929476  633029 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 14:01:24.929519  633029 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 14:01:24.962338  633029 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 14:01:24.962356  633029 crio.go:433] Images already preloaded, skipping extraction
	I1124 14:01:24.962399  633029 ssh_runner.go:195] Run: sudo crictl images --output json
	I1124 14:01:24.987602  633029 crio.go:514] all images are preloaded for cri-o runtime.
	I1124 14:01:24.987624  633029 cache_images.go:86] Images are preloaded, skipping loading
	I1124 14:01:24.987633  633029 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1124 14:01:24.987740  633029 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=calico-165759 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:calico-165759 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico}
	I1124 14:01:24.987813  633029 ssh_runner.go:195] Run: crio config
	I1124 14:01:25.038886  633029 cni.go:84] Creating CNI manager for "calico"
	I1124 14:01:25.038930  633029 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1124 14:01:25.038960  633029 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-165759 NodeName:calico-165759 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernet
es/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1124 14:01:25.039128  633029 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "calico-165759"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1124 14:01:25.039196  633029 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1124 14:01:25.047554  633029 binaries.go:51] Found k8s binaries, skipping transfer
	I1124 14:01:25.047605  633029 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1124 14:01:25.056811  633029 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1124 14:01:25.071052  633029 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1124 14:01:25.086096  633029 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
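
The kubeadm.yaml dumped above is a plain multi-document YAML file, copied here to /var/tmp/minikube/kubeadm.yaml.new for kubeadm to consume. A minimal sketch (assumed file path; not minikube's own code) that splits the documents on "---" and reads a few fields back out of the KubeletConfiguration with the upstream types:

package main

import (
	"fmt"
	"os"
	"strings"

	kubeletv1beta1 "k8s.io/kubelet/config/v1beta1"
	"sigs.k8s.io/yaml"
)

func main() {
	raw, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new") // path from the log line above
	if err != nil {
		panic(err)
	}
	for _, doc := range strings.Split(string(raw), "\n---\n") {
		if !strings.Contains(doc, "kind: KubeletConfiguration") {
			continue
		}
		var kc kubeletv1beta1.KubeletConfiguration
		if err := yaml.Unmarshal([]byte(doc), &kc); err != nil {
			panic(err)
		}
		fmt.Println("cgroupDriver:", kc.CgroupDriver)   // "systemd", matching the CRI-O setup earlier
		fmt.Println("staticPodPath:", kc.StaticPodPath) // "/etc/kubernetes/manifests"
		fmt.Println("clusterDomain:", kc.ClusterDomain) // "cluster.local"
	}
}

Note how the kubelet's cgroupDriver here agrees with the cgroup_manager value written into /etc/crio/crio.conf.d/02-crio.conf earlier; keeping the two in sync is what the "configuring cri-o to use systemd as cgroup driver" step is for.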
	I1124 14:01:25.099755  633029 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1124 14:01:25.103587  633029 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1124 14:01:25.113679  633029 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1124 14:01:25.203303  633029 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1124 14:01:25.227397  633029 certs.go:69] Setting up /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/calico-165759 for IP: 192.168.76.2
	I1124 14:01:25.227417  633029 certs.go:195] generating shared ca certs ...
	I1124 14:01:25.227438  633029 certs.go:227] acquiring lock for ca certs: {Name:mk929c5478505d0d4647158f3ccc02830de7b582 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:01:25.227588  633029 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21932-348000/.minikube/ca.key
	I1124 14:01:25.227637  633029 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21932-348000/.minikube/proxy-client-ca.key
	I1124 14:01:25.227649  633029 certs.go:257] generating profile certs ...
	I1124 14:01:25.227722  633029 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/calico-165759/client.key
	I1124 14:01:25.227742  633029 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/calico-165759/client.crt with IP's: []
	I1124 14:01:25.310149  633029 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/calico-165759/client.crt ...
	I1124 14:01:25.310171  633029 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/calico-165759/client.crt: {Name:mkeae951c3f0b0cdafeca79ca68a9e9db5b87633 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:01:25.310333  633029 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/calico-165759/client.key ...
	I1124 14:01:25.310348  633029 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/calico-165759/client.key: {Name:mkd1f2956b4d22aee9ce10dc94504051a3a0856e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:01:25.310454  633029 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/calico-165759/apiserver.key.e8878636
	I1124 14:01:25.310476  633029 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/calico-165759/apiserver.crt.e8878636 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1124 14:01:25.528802  633029 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/calico-165759/apiserver.crt.e8878636 ...
	I1124 14:01:25.528829  633029 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/calico-165759/apiserver.crt.e8878636: {Name:mk18d213de1908ea8996e050835c99aa9c2c89f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:01:25.529004  633029 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/calico-165759/apiserver.key.e8878636 ...
	I1124 14:01:25.529025  633029 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/calico-165759/apiserver.key.e8878636: {Name:mk95a33658ed2629ae4e9fafe84972c50211e788 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:01:25.529123  633029 certs.go:382] copying /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/calico-165759/apiserver.crt.e8878636 -> /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/calico-165759/apiserver.crt
	I1124 14:01:25.529219  633029 certs.go:386] copying /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/calico-165759/apiserver.key.e8878636 -> /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/calico-165759/apiserver.key
	I1124 14:01:25.529301  633029 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/calico-165759/proxy-client.key
	I1124 14:01:25.529324  633029 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/calico-165759/proxy-client.crt with IP's: []
	I1124 14:01:25.679647  633029 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/calico-165759/proxy-client.crt ...
	I1124 14:01:25.679670  633029 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/calico-165759/proxy-client.crt: {Name:mk84c46d82df29ef34df18feff2a1e5d077ded42 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:01:25.679814  633029 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/calico-165759/proxy-client.key ...
	I1124 14:01:25.679827  633029 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/calico-165759/proxy-client.key: {Name:mk5d50b9730745a4bdedbd35426186113a98cb99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:01:25.680019  633029 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-348000/.minikube/certs/351593.pem (1338 bytes)
	W1124 14:01:25.680062  633029 certs.go:480] ignoring /home/jenkins/minikube-integration/21932-348000/.minikube/certs/351593_empty.pem, impossibly tiny 0 bytes
	I1124 14:01:25.680072  633029 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-348000/.minikube/certs/ca-key.pem (1675 bytes)
	I1124 14:01:25.680099  633029 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-348000/.minikube/certs/ca.pem (1078 bytes)
	I1124 14:01:25.680124  633029 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-348000/.minikube/certs/cert.pem (1123 bytes)
	I1124 14:01:25.680146  633029 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-348000/.minikube/certs/key.pem (1675 bytes)
	I1124 14:01:25.680199  633029 certs.go:484] found cert: /home/jenkins/minikube-integration/21932-348000/.minikube/files/etc/ssl/certs/3515932.pem (1708 bytes)
	I1124 14:01:25.680729  633029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1124 14:01:25.698816  633029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1124 14:01:25.716331  633029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1124 14:01:25.734238  633029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1124 14:01:25.752503  633029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/calico-165759/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1124 14:01:25.769993  633029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/calico-165759/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1124 14:01:25.788386  633029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/calico-165759/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1124 14:01:25.809590  633029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/calico-165759/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1124 14:01:25.828363  633029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/files/etc/ssl/certs/3515932.pem --> /usr/share/ca-certificates/3515932.pem (1708 bytes)
	I1124 14:01:25.847692  633029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1124 14:01:25.866011  633029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21932-348000/.minikube/certs/351593.pem --> /usr/share/ca-certificates/351593.pem (1338 bytes)
	I1124 14:01:25.885315  633029 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1124 14:01:25.898456  633029 ssh_runner.go:195] Run: openssl version
	I1124 14:01:25.904429  633029 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/3515932.pem && ln -fs /usr/share/ca-certificates/3515932.pem /etc/ssl/certs/3515932.pem"
	I1124 14:01:25.913735  633029 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/3515932.pem
	I1124 14:01:25.917605  633029 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov 24 13:19 /usr/share/ca-certificates/3515932.pem
	I1124 14:01:25.917655  633029 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/3515932.pem
	I1124 14:01:25.952006  633029 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/3515932.pem /etc/ssl/certs/3ec20f2e.0"
	I1124 14:01:25.960124  633029 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1124 14:01:25.968407  633029 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1124 14:01:25.972051  633029 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov 24 13:14 /usr/share/ca-certificates/minikubeCA.pem
	I1124 14:01:25.972098  633029 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1124 14:01:26.010334  633029 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1124 14:01:26.019711  633029 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/351593.pem && ln -fs /usr/share/ca-certificates/351593.pem /etc/ssl/certs/351593.pem"
	I1124 14:01:26.028122  633029 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/351593.pem
	I1124 14:01:26.031953  633029 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov 24 13:19 /usr/share/ca-certificates/351593.pem
	I1124 14:01:26.032003  633029 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/351593.pem
	I1124 14:01:26.073316  633029 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/351593.pem /etc/ssl/certs/51391683.0"
	I1124 14:01:26.084478  633029 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1124 14:01:26.088708  633029 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1124 14:01:26.088766  633029 kubeadm.go:401] StartCluster: {Name:calico-165759 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:calico-165759 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 14:01:26.088846  633029 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1124 14:01:26.088919  633029 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1124 14:01:26.119416  633029 cri.go:89] found id: ""
	I1124 14:01:26.119474  633029 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1124 14:01:26.127776  633029 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1124 14:01:26.136177  633029 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1124 14:01:26.136229  633029 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1124 14:01:26.144550  633029 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1124 14:01:26.144568  633029 kubeadm.go:158] found existing configuration files:
	
	I1124 14:01:26.144608  633029 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1124 14:01:26.153772  633029 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1124 14:01:26.153827  633029 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1124 14:01:26.161255  633029 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1124 14:01:26.168592  633029 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1124 14:01:26.168628  633029 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1124 14:01:26.175573  633029 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1124 14:01:26.183280  633029 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1124 14:01:26.183327  633029 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1124 14:01:26.191003  633029 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1124 14:01:26.198880  633029 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1124 14:01:26.198954  633029 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1124 14:01:26.207250  633029 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1124 14:01:26.247390  633029 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1124 14:01:26.247444  633029 kubeadm.go:319] [preflight] Running pre-flight checks
	I1124 14:01:26.267934  633029 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1124 14:01:26.268050  633029 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1044-gcp
	I1124 14:01:26.268135  633029 kubeadm.go:319] OS: Linux
	I1124 14:01:26.268200  633029 kubeadm.go:319] CGROUPS_CPU: enabled
	I1124 14:01:26.268276  633029 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1124 14:01:26.268347  633029 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1124 14:01:26.268433  633029 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1124 14:01:26.268506  633029 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1124 14:01:26.268576  633029 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1124 14:01:26.268640  633029 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1124 14:01:26.268713  633029 kubeadm.go:319] CGROUPS_IO: enabled
	I1124 14:01:26.332560  633029 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1124 14:01:26.332733  633029 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1124 14:01:26.332860  633029 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1124 14:01:26.340698  633029 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
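
Note on the certificate setup traced above: after copying each PEM into /usr/share/ca-certificates, the log shows "openssl x509 -hash -noout -in <file>" being run and /etc/ssl/certs/<subject-hash>.0 being symlinked to the file, which is the standard OpenSSL trust-store lookup convention. A minimal sketch of that one step, using a path taken from the log; this is an illustration only, not minikube's code:

    import os
    import subprocess

    pem = "/usr/share/ca-certificates/minikubeCA.pem"  # path from the log above
    # Ask openssl for the subject hash, exactly as the logged command does.
    out = subprocess.run(
        ["openssl", "x509", "-hash", "-noout", "-in", pem],
        capture_output=True, text=True, check=True,
    )
    link = f"/etc/ssl/certs/{out.stdout.strip()}.0"
    if not os.path.islink(link):
        os.symlink(pem, link)  # equivalent of the logged "ln -fs" step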
	
	
	==> CRI-O <==
	Nov 24 14:01:07 default-k8s-diff-port-098307 crio[563]: time="2025-11-24T14:01:07.468869768Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 14:01:07 default-k8s-diff-port-098307 crio[563]: time="2025-11-24T14:01:07.469097347Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/70c39b1e4e9c9e316551f51b4fa40d3321778f14fc44da1eced0d904ff128cfe/merged/etc/passwd: no such file or directory"
	Nov 24 14:01:07 default-k8s-diff-port-098307 crio[563]: time="2025-11-24T14:01:07.469133675Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/70c39b1e4e9c9e316551f51b4fa40d3321778f14fc44da1eced0d904ff128cfe/merged/etc/group: no such file or directory"
	Nov 24 14:01:07 default-k8s-diff-port-098307 crio[563]: time="2025-11-24T14:01:07.46942714Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 14:01:07 default-k8s-diff-port-098307 crio[563]: time="2025-11-24T14:01:07.493154376Z" level=info msg="Created container 182a4de9d25bca95244c7e52820a2bdddc8b5f8d9db612276fa7dd6a907c37ed: kube-system/storage-provisioner/storage-provisioner" id=1a227bf7-0536-494f-a5a3-66b428206411 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 14:01:07 default-k8s-diff-port-098307 crio[563]: time="2025-11-24T14:01:07.493781608Z" level=info msg="Starting container: 182a4de9d25bca95244c7e52820a2bdddc8b5f8d9db612276fa7dd6a907c37ed" id=6871be2a-9c32-4539-a67a-24edf75fdecc name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 14:01:07 default-k8s-diff-port-098307 crio[563]: time="2025-11-24T14:01:07.495877177Z" level=info msg="Started container" PID=1687 containerID=182a4de9d25bca95244c7e52820a2bdddc8b5f8d9db612276fa7dd6a907c37ed description=kube-system/storage-provisioner/storage-provisioner id=6871be2a-9c32-4539-a67a-24edf75fdecc name=/runtime.v1.RuntimeService/StartContainer sandboxID=78c03823911a20545998a1e0e93ab9b965b376fdf807276e294e6cb0d41a1b3b
	Nov 24 14:01:17 default-k8s-diff-port-098307 crio[563]: time="2025-11-24T14:01:17.098318413Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 24 14:01:17 default-k8s-diff-port-098307 crio[563]: time="2025-11-24T14:01:17.103150738Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 24 14:01:17 default-k8s-diff-port-098307 crio[563]: time="2025-11-24T14:01:17.103179412Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 24 14:01:17 default-k8s-diff-port-098307 crio[563]: time="2025-11-24T14:01:17.103197237Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 24 14:01:17 default-k8s-diff-port-098307 crio[563]: time="2025-11-24T14:01:17.106650924Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 24 14:01:17 default-k8s-diff-port-098307 crio[563]: time="2025-11-24T14:01:17.106675843Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 24 14:01:17 default-k8s-diff-port-098307 crio[563]: time="2025-11-24T14:01:17.106691561Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 24 14:01:17 default-k8s-diff-port-098307 crio[563]: time="2025-11-24T14:01:17.110353177Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 24 14:01:17 default-k8s-diff-port-098307 crio[563]: time="2025-11-24T14:01:17.110378995Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 24 14:01:17 default-k8s-diff-port-098307 crio[563]: time="2025-11-24T14:01:17.110399822Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 24 14:01:17 default-k8s-diff-port-098307 crio[563]: time="2025-11-24T14:01:17.114108959Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 24 14:01:17 default-k8s-diff-port-098307 crio[563]: time="2025-11-24T14:01:17.114133989Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 24 14:01:17 default-k8s-diff-port-098307 crio[563]: time="2025-11-24T14:01:17.114155709Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 24 14:01:17 default-k8s-diff-port-098307 crio[563]: time="2025-11-24T14:01:17.117530862Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 24 14:01:17 default-k8s-diff-port-098307 crio[563]: time="2025-11-24T14:01:17.117553769Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 24 14:01:17 default-k8s-diff-port-098307 crio[563]: time="2025-11-24T14:01:17.117573316Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 24 14:01:17 default-k8s-diff-port-098307 crio[563]: time="2025-11-24T14:01:17.121262014Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 24 14:01:17 default-k8s-diff-port-098307 crio[563]: time="2025-11-24T14:01:17.121279425Z" level=info msg="Updated default CNI network name to kindnet"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	182a4de9d25bc       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           19 seconds ago      Running             storage-provisioner         1                   78c03823911a2       storage-provisioner                                    kube-system
	e39fdc894045a       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           25 seconds ago      Exited              dashboard-metrics-scraper   2                   efddaf5553512       dashboard-metrics-scraper-6ffb444bf9-l5qpb             kubernetes-dashboard
	90e245ba4cca9       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   39 seconds ago      Running             kubernetes-dashboard        0                   c307ff22b87e4       kubernetes-dashboard-855c9754f9-wqmwj                  kubernetes-dashboard
	033562f6653c9       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           50 seconds ago      Running             coredns                     0                   ae16fb75d1fb0       coredns-66bc5c9577-kzf7b                               kube-system
	e1d48b30c6c1f       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           50 seconds ago      Running             busybox                     1                   aabf593d2af41       busybox                                                default
	29b2d0fa2f290       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           50 seconds ago      Running             kindnet-cni                 0                   e91d1617d722e       kindnet-qswz4                                          kube-system
	da2489f88ad25       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           50 seconds ago      Exited              storage-provisioner         0                   78c03823911a2       storage-provisioner                                    kube-system
	2e2ac078f3f0b       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           50 seconds ago      Running             kube-proxy                  0                   5a3123aa73dbf       kube-proxy-8ck8x                                       kube-system
	3e655d65400a5       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           53 seconds ago      Running             kube-scheduler              0                   5bd9f06da2f5f       kube-scheduler-default-k8s-diff-port-098307            kube-system
	b39f44030d6ad       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           53 seconds ago      Running             kube-apiserver              0                   70ef5397d1f47       kube-apiserver-default-k8s-diff-port-098307            kube-system
	efcb2dcd55832       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           53 seconds ago      Running             kube-controller-manager     0                   661fa6e00ec90       kube-controller-manager-default-k8s-diff-port-098307   kube-system
	d21ba0b0da991       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           53 seconds ago      Running             etcd                        0                   9ed02d9490c53       etcd-default-k8s-diff-port-098307                      kube-system
	
	
	==> coredns [033562f6653c9ff0552e30ef3a659624de1155d4ab2ae6d29b2138a7aaf7c061] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 66f0a748f44f6317a6b122af3f457c9dd0ecaed8718ffbf95a69434523efd9ec4992e71f54c7edd5753646fe9af89ac2138b9c3ce14d4a0ba9d2372a55f120bb
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:37059 - 1858 "HINFO IN 4834109310681306563.3751431008118080161. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.122651924s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
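
The i/o timeouts above show CoreDNS unable to reach the API server through the kubernetes Service VIP (10.96.0.1:443) while the control plane was restarting; the kindnet log further down shows the same symptom before its caches sync. A hypothetical connectivity probe of that VIP, for illustration only and not part of the test run, could look like:

    import socket

    # Plain TCP connect to the kubernetes Service ClusterIP, the same endpoint
    # client-go was timing out against in the CoreDNS log above.
    try:
        with socket.create_connection(("10.96.0.1", 443), timeout=5):
            print("10.96.0.1:443 reachable")
    except OSError as exc:
        print(f"10.96.0.1:443 unreachable: {exc}")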
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-098307
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-098307
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b5d1c9f4e75f4e638a533695fd62619949cefcab
	                    minikube.k8s.io/name=default-k8s-diff-port-098307
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T13_59_39_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 13:59:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-098307
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 14:01:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 14:01:06 +0000   Mon, 24 Nov 2025 13:59:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 14:01:06 +0000   Mon, 24 Nov 2025 13:59:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 14:01:06 +0000   Mon, 24 Nov 2025 13:59:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 14:01:06 +0000   Mon, 24 Nov 2025 13:59:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    default-k8s-diff-port-098307
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                346f1d74-50ec-4327-a799-559dc98af4c4
	  Boot ID:                    9a34d64a-eb17-4892-9c0b-855837aec864
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 coredns-66bc5c9577-kzf7b                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     103s
	  kube-system                 etcd-default-k8s-diff-port-098307                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         110s
	  kube-system                 kindnet-qswz4                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      103s
	  kube-system                 kube-apiserver-default-k8s-diff-port-098307             250m (3%)     0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-098307    200m (2%)     0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-proxy-8ck8x                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         103s
	  kube-system                 kube-scheduler-default-k8s-diff-port-098307             100m (1%)     0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         103s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-l5qpb              0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-wqmwj                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 102s               kube-proxy       
	  Normal  Starting                 50s                kube-proxy       
	  Normal  NodeHasSufficientMemory  109s               kubelet          Node default-k8s-diff-port-098307 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    109s               kubelet          Node default-k8s-diff-port-098307 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     109s               kubelet          Node default-k8s-diff-port-098307 status is now: NodeHasSufficientPID
	  Normal  Starting                 109s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           105s               node-controller  Node default-k8s-diff-port-098307 event: Registered Node default-k8s-diff-port-098307 in Controller
	  Normal  NodeReady                92s                kubelet          Node default-k8s-diff-port-098307 status is now: NodeReady
	  Normal  Starting                 54s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  54s (x8 over 54s)  kubelet          Node default-k8s-diff-port-098307 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    54s (x8 over 54s)  kubelet          Node default-k8s-diff-port-098307 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     54s (x8 over 54s)  kubelet          Node default-k8s-diff-port-098307 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           49s                node-controller  Node default-k8s-diff-port-098307 event: Registered Node default-k8s-diff-port-098307 in Controller
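
For readers checking the Allocated resources table above: each percentage is the request or limit divided by the node's Allocatable capacity (8 CPUs, 32863348Ki memory), and the reported values are consistent with truncation to whole percent (850m CPU shows as 10%, 220Mi memory as 0%). Illustrative arithmetic only:

    # Allocatable capacity from the node description above.
    cpu_alloc_millicores = 8 * 1000
    mem_alloc_ki = 32863348

    cpu_requests_millicores = 850   # total CPU requests
    mem_requests_ki = 220 * 1024    # 220Mi of memory requests

    print(int(100 * cpu_requests_millicores / cpu_alloc_millicores))  # 10
    print(int(100 * mem_requests_ki / mem_alloc_ki))                  # 0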
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a c8 62 0b 56 43 08 06
	[  +0.000389] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 7e 1c 4f d3 0f 6c 08 06
	[Nov24 13:16] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +1.054353] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +1.023912] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +1.023872] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +1.023897] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +1.023891] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +2.047768] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +4.031637] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +8.191144] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[ +16.382308] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[Nov24 13:17] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	
	
	==> etcd [d21ba0b0da991a1e74ea43fb065cb9766681c65ea8b443a6386de6f40572612f] <==
	{"level":"warn","ts":"2025-11-24T14:00:34.857124Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:00:34.865745Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:00:34.872553Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40612","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:00:34.879426Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40628","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:00:34.892862Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40662","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:00:34.899622Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40678","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:00:34.907502Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40684","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:00:34.914606Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40688","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:00:34.927011Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40718","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:00:34.933856Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40732","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:00:34.940818Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40744","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:00:34.948520Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40760","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:00:34.955984Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40796","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:00:34.961879Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40800","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:00:34.969326Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40816","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:00:34.976580Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40838","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:00:34.982491Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40864","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:00:34.996340Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40890","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:00:35.002566Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40910","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:00:35.015154Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40936","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:00:35.021831Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40956","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:00:35.028691Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40974","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:00:35.076019Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:40994","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:00:47.178108Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"102.761102ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-66bc5c9577-kzf7b\" limit:1 ","response":"range_response_count:1 size:5946"}
	{"level":"info","ts":"2025-11-24T14:00:47.178192Z","caller":"traceutil/trace.go:172","msg":"trace[1525900628] range","detail":"{range_begin:/registry/pods/kube-system/coredns-66bc5c9577-kzf7b; range_end:; response_count:1; response_revision:586; }","duration":"102.878235ms","start":"2025-11-24T14:00:47.075297Z","end":"2025-11-24T14:00:47.178175Z","steps":["trace[1525900628] 'range keys from in-memory index tree'  (duration: 102.608451ms)"],"step_count":1}
	
	
	==> kernel <==
	 14:01:27 up  2:43,  0 user,  load average: 3.89, 3.32, 2.25
	Linux default-k8s-diff-port-098307 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [29b2d0fa2f290d7f2973b915d97506754f7e042d52e71847abc829f4c5d59d98] <==
	I1124 14:00:36.888561       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 14:00:36.888772       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1124 14:00:36.888942       1 main.go:148] setting mtu 1500 for CNI 
	I1124 14:00:36.888961       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 14:00:36.888985       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T14:00:37Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 14:00:37.090816       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 14:00:37.090877       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 14:00:37.090912       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 14:00:37.091055       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1124 14:01:07.092158       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1124 14:01:07.092168       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1124 14:01:07.093101       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1124 14:01:07.093109       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	I1124 14:01:08.591943       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 14:01:08.591972       1 metrics.go:72] Registering metrics
	I1124 14:01:08.592063       1 controller.go:711] "Syncing nftables rules"
	I1124 14:01:17.098010       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1124 14:01:17.098051       1 main.go:301] handling current node
	I1124 14:01:27.099965       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1124 14:01:27.100000       1 main.go:301] handling current node
	
	
	==> kube-apiserver [b39f44030d6ada4f06ee562f173b210839aa14cc4257bcab4e97acb016cd5680] <==
	I1124 14:00:35.554877       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1124 14:00:35.554928       1 aggregator.go:171] initial CRD sync complete...
	I1124 14:00:35.554937       1 autoregister_controller.go:144] Starting autoregister controller
	I1124 14:00:35.554943       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1124 14:00:35.554950       1 cache.go:39] Caches are synced for autoregister controller
	I1124 14:00:35.555164       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1124 14:00:35.555167       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1124 14:00:35.555235       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1124 14:00:35.559864       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1124 14:00:35.562442       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1124 14:00:35.562467       1 policy_source.go:240] refreshing policies
	I1124 14:00:35.583785       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1124 14:00:35.601498       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1124 14:00:35.829721       1 controller.go:667] quota admission added evaluator for: namespaces
	I1124 14:00:35.859379       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1124 14:00:35.876768       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 14:00:35.883722       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 14:00:35.895324       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1124 14:00:35.922788       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.98.103.166"}
	I1124 14:00:35.931330       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.103.179.146"}
	I1124 14:00:36.456917       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1124 14:00:39.228080       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1124 14:00:39.228125       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1124 14:00:39.328214       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1124 14:00:39.433383       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [efcb2dcd558320e34a3d25837fb159e3f4dd2ff10a8fd5ce8d21450a8a027300] <==
	I1124 14:00:38.873942       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1124 14:00:38.873994       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1124 14:00:38.874059       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1124 14:00:38.874160       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1124 14:00:38.874274       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-098307"
	I1124 14:00:38.874330       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1124 14:00:38.873953       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1124 14:00:38.873966       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1124 14:00:38.874019       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1124 14:00:38.875078       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1124 14:00:38.875081       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1124 14:00:38.875105       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1124 14:00:38.875806       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1124 14:00:38.881818       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1124 14:00:38.881834       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1124 14:00:38.881842       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1124 14:00:38.885370       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1124 14:00:38.892931       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 14:00:38.898300       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1124 14:00:38.898356       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1124 14:00:38.898389       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1124 14:00:38.898397       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1124 14:00:38.898402       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1124 14:00:38.900736       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1124 14:00:38.909066       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [2e2ac078f3f0b7db0a740d8374ba34d253a8790349b72321d9682db61b4abb2a] <==
	I1124 14:00:36.745781       1 server_linux.go:53] "Using iptables proxy"
	I1124 14:00:36.802042       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1124 14:00:36.902188       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1124 14:00:36.902229       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1124 14:00:36.902309       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 14:00:36.926397       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 14:00:36.926458       1 server_linux.go:132] "Using iptables Proxier"
	I1124 14:00:36.932364       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 14:00:36.932748       1 server.go:527] "Version info" version="v1.34.1"
	I1124 14:00:36.932788       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 14:00:36.934175       1 config.go:200] "Starting service config controller"
	I1124 14:00:36.934238       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 14:00:36.934183       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 14:00:36.934328       1 config.go:309] "Starting node config controller"
	I1124 14:00:36.934660       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 14:00:36.934673       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1124 14:00:36.934196       1 config.go:106] "Starting endpoint slice config controller"
	I1124 14:00:36.934693       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 14:00:36.934329       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 14:00:37.034613       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1124 14:00:37.034743       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1124 14:00:37.034762       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [3e655d65400a54487d785f394fe12c8ced15c6f5d18334990e13f76babe2a555] <==
	I1124 14:00:34.568136       1 serving.go:386] Generated self-signed cert in-memory
	W1124 14:00:35.477462       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1124 14:00:35.477501       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1124 14:00:35.477531       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1124 14:00:35.477542       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1124 14:00:35.516092       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1124 14:00:35.516124       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 14:00:35.518863       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 14:00:35.518921       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 14:00:35.519387       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1124 14:00:35.519478       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	E1124 14:00:35.525712       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1124 14:00:35.529649       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found, clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1124 14:00:35.529737       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1124 14:00:35.529826       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1124 14:00:35.529913       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1124 14:00:35.529999       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1124 14:00:35.530113       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope: RBAC: [clusterrole.rbac.authorization.k8s.io \"system:kube-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found, clusterrole.rbac.authorization.k8s.io \"system:volume-scheduler\" not found, clusterrole.rbac.authorization.k8s.io \"system:discovery\" not found, clusterrole.rbac.authorization.k8s.io \"system:basic-user\" not found]" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	I1124 14:00:35.620033       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 24 14:00:39 default-k8s-diff-port-098307 kubelet[716]: I1124 14:00:39.506535     716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/cf3fb153-4e49-4ca7-9df1-98d9cb94e424-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-l5qpb\" (UID: \"cf3fb153-4e49-4ca7-9df1-98d9cb94e424\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-l5qpb"
	Nov 24 14:00:39 default-k8s-diff-port-098307 kubelet[716]: I1124 14:00:39.506586     716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdb5r\" (UniqueName: \"kubernetes.io/projected/cf3fb153-4e49-4ca7-9df1-98d9cb94e424-kube-api-access-xdb5r\") pod \"dashboard-metrics-scraper-6ffb444bf9-l5qpb\" (UID: \"cf3fb153-4e49-4ca7-9df1-98d9cb94e424\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-l5qpb"
	Nov 24 14:00:39 default-k8s-diff-port-098307 kubelet[716]: I1124 14:00:39.506613     716 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/6c8af29b-f744-4b39-94fa-3b71fa5188ee-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-wqmwj\" (UID: \"6c8af29b-f744-4b39-94fa-3b71fa5188ee\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-wqmwj"
	Nov 24 14:00:43 default-k8s-diff-port-098307 kubelet[716]: I1124 14:00:43.386666     716 scope.go:117] "RemoveContainer" containerID="f38809cfbfd2d9f5af50c1a46da59aabad88a7f018ab45286bf18ef935008dfb"
	Nov 24 14:00:44 default-k8s-diff-port-098307 kubelet[716]: I1124 14:00:44.391316     716 scope.go:117] "RemoveContainer" containerID="3278d9ec1a04122b762c190ba36ec0395d919fb429008753d6b34b37b1db34fb"
	Nov 24 14:00:44 default-k8s-diff-port-098307 kubelet[716]: E1124 14:00:44.391487     716 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-l5qpb_kubernetes-dashboard(cf3fb153-4e49-4ca7-9df1-98d9cb94e424)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-l5qpb" podUID="cf3fb153-4e49-4ca7-9df1-98d9cb94e424"
	Nov 24 14:00:44 default-k8s-diff-port-098307 kubelet[716]: I1124 14:00:44.391502     716 scope.go:117] "RemoveContainer" containerID="f38809cfbfd2d9f5af50c1a46da59aabad88a7f018ab45286bf18ef935008dfb"
	Nov 24 14:00:45 default-k8s-diff-port-098307 kubelet[716]: I1124 14:00:45.395735     716 scope.go:117] "RemoveContainer" containerID="3278d9ec1a04122b762c190ba36ec0395d919fb429008753d6b34b37b1db34fb"
	Nov 24 14:00:45 default-k8s-diff-port-098307 kubelet[716]: E1124 14:00:45.395923     716 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-l5qpb_kubernetes-dashboard(cf3fb153-4e49-4ca7-9df1-98d9cb94e424)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-l5qpb" podUID="cf3fb153-4e49-4ca7-9df1-98d9cb94e424"
	Nov 24 14:00:46 default-k8s-diff-port-098307 kubelet[716]: I1124 14:00:46.398389     716 scope.go:117] "RemoveContainer" containerID="3278d9ec1a04122b762c190ba36ec0395d919fb429008753d6b34b37b1db34fb"
	Nov 24 14:00:46 default-k8s-diff-port-098307 kubelet[716]: E1124 14:00:46.398646     716 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-l5qpb_kubernetes-dashboard(cf3fb153-4e49-4ca7-9df1-98d9cb94e424)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-l5qpb" podUID="cf3fb153-4e49-4ca7-9df1-98d9cb94e424"
	Nov 24 14:00:50 default-k8s-diff-port-098307 kubelet[716]: I1124 14:00:50.232879     716 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-wqmwj" podStartSLOduration=3.25917679 podStartE2EDuration="11.232856846s" podCreationTimestamp="2025-11-24 14:00:39 +0000 UTC" firstStartedPulling="2025-11-24 14:00:39.791111079 +0000 UTC m=+6.548400941" lastFinishedPulling="2025-11-24 14:00:47.764791137 +0000 UTC m=+14.522080997" observedRunningTime="2025-11-24 14:00:48.428338636 +0000 UTC m=+15.185628507" watchObservedRunningTime="2025-11-24 14:00:50.232856846 +0000 UTC m=+16.990146723"
	Nov 24 14:01:01 default-k8s-diff-port-098307 kubelet[716]: I1124 14:01:01.337623     716 scope.go:117] "RemoveContainer" containerID="3278d9ec1a04122b762c190ba36ec0395d919fb429008753d6b34b37b1db34fb"
	Nov 24 14:01:01 default-k8s-diff-port-098307 kubelet[716]: I1124 14:01:01.443757     716 scope.go:117] "RemoveContainer" containerID="3278d9ec1a04122b762c190ba36ec0395d919fb429008753d6b34b37b1db34fb"
	Nov 24 14:01:01 default-k8s-diff-port-098307 kubelet[716]: I1124 14:01:01.444033     716 scope.go:117] "RemoveContainer" containerID="e39fdc894045a09bb6f11a1cd48943e615c697f3efb9c41f722ac14993e0f490"
	Nov 24 14:01:01 default-k8s-diff-port-098307 kubelet[716]: E1124 14:01:01.444214     716 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-l5qpb_kubernetes-dashboard(cf3fb153-4e49-4ca7-9df1-98d9cb94e424)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-l5qpb" podUID="cf3fb153-4e49-4ca7-9df1-98d9cb94e424"
	Nov 24 14:01:06 default-k8s-diff-port-098307 kubelet[716]: I1124 14:01:06.197781     716 scope.go:117] "RemoveContainer" containerID="e39fdc894045a09bb6f11a1cd48943e615c697f3efb9c41f722ac14993e0f490"
	Nov 24 14:01:06 default-k8s-diff-port-098307 kubelet[716]: E1124 14:01:06.197979     716 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-l5qpb_kubernetes-dashboard(cf3fb153-4e49-4ca7-9df1-98d9cb94e424)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-l5qpb" podUID="cf3fb153-4e49-4ca7-9df1-98d9cb94e424"
	Nov 24 14:01:07 default-k8s-diff-port-098307 kubelet[716]: I1124 14:01:07.461595     716 scope.go:117] "RemoveContainer" containerID="da2489f88ad2584c48c9d5c92be242dc901ce15d982e934764f720796e292a26"
	Nov 24 14:01:17 default-k8s-diff-port-098307 kubelet[716]: I1124 14:01:17.338153     716 scope.go:117] "RemoveContainer" containerID="e39fdc894045a09bb6f11a1cd48943e615c697f3efb9c41f722ac14993e0f490"
	Nov 24 14:01:17 default-k8s-diff-port-098307 kubelet[716]: E1124 14:01:17.338394     716 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-l5qpb_kubernetes-dashboard(cf3fb153-4e49-4ca7-9df1-98d9cb94e424)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-l5qpb" podUID="cf3fb153-4e49-4ca7-9df1-98d9cb94e424"
	Nov 24 14:01:22 default-k8s-diff-port-098307 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 24 14:01:22 default-k8s-diff-port-098307 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 24 14:01:22 default-k8s-diff-port-098307 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 24 14:01:22 default-k8s-diff-port-098307 systemd[1]: kubelet.service: Consumed 1.546s CPU time.
	
	
	==> kubernetes-dashboard [90e245ba4cca96d1989f2bb7458706293f79c44d091efd4a6f13c934fa98aac6] <==
	2025/11/24 14:00:47 Starting overwatch
	2025/11/24 14:00:47 Using namespace: kubernetes-dashboard
	2025/11/24 14:00:47 Using in-cluster config to connect to apiserver
	2025/11/24 14:00:47 Using secret token for csrf signing
	2025/11/24 14:00:47 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/24 14:00:47 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/24 14:00:47 Successful initial request to the apiserver, version: v1.34.1
	2025/11/24 14:00:47 Generating JWE encryption key
	2025/11/24 14:00:47 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/24 14:00:47 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/24 14:00:48 Initializing JWE encryption key from synchronized object
	2025/11/24 14:00:48 Creating in-cluster Sidecar client
	2025/11/24 14:00:48 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/24 14:00:48 Serving insecurely on HTTP port: 9090
	2025/11/24 14:01:18 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [182a4de9d25bca95244c7e52820a2bdddc8b5f8d9db612276fa7dd6a907c37ed] <==
	I1124 14:01:07.507720       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1124 14:01:07.515293       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1124 14:01:07.515350       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1124 14:01:07.517199       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:01:10.972104       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:01:15.232632       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:01:18.831133       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:01:21.885736       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:01:24.907790       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:01:24.913004       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 14:01:24.913144       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1124 14:01:24.913245       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"81dbd621-c84a-49e1-bca9-3457968fc43a", APIVersion:"v1", ResourceVersion:"629", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-098307_887e559d-8eae-48c2-b33d-a1eb948081f6 became leader
	I1124 14:01:24.913354       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-098307_887e559d-8eae-48c2-b33d-a1eb948081f6!
	W1124 14:01:24.915168       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:01:24.918931       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 14:01:25.013619       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-098307_887e559d-8eae-48c2-b33d-a1eb948081f6!
	W1124 14:01:26.921761       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:01:26.926772       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [da2489f88ad2584c48c9d5c92be242dc901ce15d982e934764f720796e292a26] <==
	I1124 14:00:36.716964       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1124 14:01:06.718414       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-098307 -n default-k8s-diff-port-098307
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-098307 -n default-k8s-diff-port-098307: exit status 2 (322.274331ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-098307 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (6.20s)
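(editor's sketch, not captured output) This Pause failure, like the embed-certs one below, exits with GUEST_PAUSE because minikube's pause path shells out to `sudo runc list -f json`, which fails with `open /run/runc: no such file or directory`. A minimal, hedged way to re-run that probe by hand against the same profile, assuming the cluster is still running and that cri-o may be configured with crun rather than runc (in which case /run/runc would legitimately be absent):

	# the exact command pause.go retries, taken from the log above
	minikube -p default-k8s-diff-port-098307 ssh -- sudo runc list -f json
	# check which OCI state directory actually exists (crun vs runc is an assumption, not from the report)
	minikube -p default-k8s-diff-port-098307 ssh -- ls -d /run/runc /run/crun
	# confirm the runtime cri-o is configured with; `sudo crio config` appears as a real command elsewhere in this report
	minikube -p default-k8s-diff-port-098307 ssh -- sudo crio config | grep -E 'default_runtime|runtime_path'

Only the profile name and the `runc list` invocation come from the captured logs; the rest is a diagnostic suggestion.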

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (7.26s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-456660 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p embed-certs-456660 --alsologtostderr -v=1: exit status 80 (2.408340358s)

                                                
                                                
-- stdout --
	* Pausing node embed-certs-456660 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1124 14:01:32.600366  638395 out.go:360] Setting OutFile to fd 1 ...
	I1124 14:01:32.600640  638395 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 14:01:32.600651  638395 out.go:374] Setting ErrFile to fd 2...
	I1124 14:01:32.600655  638395 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 14:01:32.600880  638395 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-348000/.minikube/bin
	I1124 14:01:32.601177  638395 out.go:368] Setting JSON to false
	I1124 14:01:32.601205  638395 mustload.go:66] Loading cluster: embed-certs-456660
	I1124 14:01:32.601595  638395 config.go:182] Loaded profile config "embed-certs-456660": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 14:01:32.602045  638395 cli_runner.go:164] Run: docker container inspect embed-certs-456660 --format={{.State.Status}}
	I1124 14:01:32.620517  638395 host.go:66] Checking if "embed-certs-456660" exists ...
	I1124 14:01:32.620856  638395 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 14:01:32.698913  638395 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:76 OomKillDisable:false NGoroutines:93 SystemTime:2025-11-24 14:01:32.688546283 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 14:01:32.699740  638395 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1763503576-21924/minikube-v1.37.0-1763503576-21924-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1763503576-21924-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:embed-certs-456660 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true
) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1124 14:01:32.705721  638395 out.go:179] * Pausing node embed-certs-456660 ... 
	I1124 14:01:32.707211  638395 host.go:66] Checking if "embed-certs-456660" exists ...
	I1124 14:01:32.707588  638395 ssh_runner.go:195] Run: systemctl --version
	I1124 14:01:32.707638  638395 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-456660
	I1124 14:01:32.736125  638395 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33483 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/embed-certs-456660/id_rsa Username:docker}
	I1124 14:01:32.855957  638395 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 14:01:32.868422  638395 pause.go:52] kubelet running: true
	I1124 14:01:32.868489  638395 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1124 14:01:33.107729  638395 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1124 14:01:33.107838  638395 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1124 14:01:33.179394  638395 cri.go:89] found id: "2fee0c2f6ff9ffd6bc9a2054e39a4cf266c5e67a179ca9f84bedb2135194353d"
	I1124 14:01:33.179423  638395 cri.go:89] found id: "5501ca0c7fb4eb766d1f1267cdd592ef1abb6f036d0c0e6e686f3dfb130ff854"
	I1124 14:01:33.179430  638395 cri.go:89] found id: "87daf6e06706d8d8b44bbb2aa7f0e1165e3bb91aa705936757264cda31996eb4"
	I1124 14:01:33.179435  638395 cri.go:89] found id: "7d104f956282eb0c0892603f25ca5ca1dcbb6e0b3315dd73f7a02f9d43b26a6e"
	I1124 14:01:33.179441  638395 cri.go:89] found id: "874e0893a46264332920443ab04e012d22d78baea09033794f22066fb59e4e17"
	I1124 14:01:33.179446  638395 cri.go:89] found id: "4fd82fcf0a95c7ded90099f1ef94b195a1bfbec5996b4c8707133b0ae2e94054"
	I1124 14:01:33.179450  638395 cri.go:89] found id: "c060c8b92a797680bc8311ef0a54ce5bacbba9cdfb27356a2c9ebd54d3f1eba9"
	I1124 14:01:33.179455  638395 cri.go:89] found id: "7a8be54b5dc721d84f31ea8fd1ee274f5d8e338f35ccf6545b4ae1a0ae3390eb"
	I1124 14:01:33.179460  638395 cri.go:89] found id: "9272ef68efbd4d16c91f204260a6c267f366f85059e13af91359474c4768da2f"
	I1124 14:01:33.179470  638395 cri.go:89] found id: "e8aa71d4c87146e95bcfe0c7b254a87aac05ab293ba95da6798abedbd3f78277"
	I1124 14:01:33.179475  638395 cri.go:89] found id: "4a15b0d874fde66337525492ac9435a3bc9f5a8b35fc018a641eb84f0c7e048f"
	I1124 14:01:33.179490  638395 cri.go:89] found id: ""
	I1124 14:01:33.179553  638395 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 14:01:33.192201  638395 retry.go:31] will retry after 349.687522ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T14:01:33Z" level=error msg="open /run/runc: no such file or directory"
	I1124 14:01:33.542805  638395 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 14:01:33.555580  638395 pause.go:52] kubelet running: false
	I1124 14:01:33.555643  638395 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1124 14:01:33.704021  638395 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1124 14:01:33.704108  638395 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1124 14:01:33.776191  638395 cri.go:89] found id: "2fee0c2f6ff9ffd6bc9a2054e39a4cf266c5e67a179ca9f84bedb2135194353d"
	I1124 14:01:33.776215  638395 cri.go:89] found id: "5501ca0c7fb4eb766d1f1267cdd592ef1abb6f036d0c0e6e686f3dfb130ff854"
	I1124 14:01:33.776220  638395 cri.go:89] found id: "87daf6e06706d8d8b44bbb2aa7f0e1165e3bb91aa705936757264cda31996eb4"
	I1124 14:01:33.776223  638395 cri.go:89] found id: "7d104f956282eb0c0892603f25ca5ca1dcbb6e0b3315dd73f7a02f9d43b26a6e"
	I1124 14:01:33.776226  638395 cri.go:89] found id: "874e0893a46264332920443ab04e012d22d78baea09033794f22066fb59e4e17"
	I1124 14:01:33.776230  638395 cri.go:89] found id: "4fd82fcf0a95c7ded90099f1ef94b195a1bfbec5996b4c8707133b0ae2e94054"
	I1124 14:01:33.776239  638395 cri.go:89] found id: "c060c8b92a797680bc8311ef0a54ce5bacbba9cdfb27356a2c9ebd54d3f1eba9"
	I1124 14:01:33.776242  638395 cri.go:89] found id: "7a8be54b5dc721d84f31ea8fd1ee274f5d8e338f35ccf6545b4ae1a0ae3390eb"
	I1124 14:01:33.776245  638395 cri.go:89] found id: "9272ef68efbd4d16c91f204260a6c267f366f85059e13af91359474c4768da2f"
	I1124 14:01:33.776251  638395 cri.go:89] found id: "e8aa71d4c87146e95bcfe0c7b254a87aac05ab293ba95da6798abedbd3f78277"
	I1124 14:01:33.776254  638395 cri.go:89] found id: "4a15b0d874fde66337525492ac9435a3bc9f5a8b35fc018a641eb84f0c7e048f"
	I1124 14:01:33.776256  638395 cri.go:89] found id: ""
	I1124 14:01:33.776294  638395 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 14:01:33.788112  638395 retry.go:31] will retry after 533.712404ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T14:01:33Z" level=error msg="open /run/runc: no such file or directory"
	I1124 14:01:34.322501  638395 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 14:01:34.337518  638395 pause.go:52] kubelet running: false
	I1124 14:01:34.337576  638395 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1124 14:01:34.504588  638395 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1124 14:01:34.504679  638395 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1124 14:01:34.576593  638395 cri.go:89] found id: "2fee0c2f6ff9ffd6bc9a2054e39a4cf266c5e67a179ca9f84bedb2135194353d"
	I1124 14:01:34.576619  638395 cri.go:89] found id: "5501ca0c7fb4eb766d1f1267cdd592ef1abb6f036d0c0e6e686f3dfb130ff854"
	I1124 14:01:34.576626  638395 cri.go:89] found id: "87daf6e06706d8d8b44bbb2aa7f0e1165e3bb91aa705936757264cda31996eb4"
	I1124 14:01:34.576631  638395 cri.go:89] found id: "7d104f956282eb0c0892603f25ca5ca1dcbb6e0b3315dd73f7a02f9d43b26a6e"
	I1124 14:01:34.576635  638395 cri.go:89] found id: "874e0893a46264332920443ab04e012d22d78baea09033794f22066fb59e4e17"
	I1124 14:01:34.576641  638395 cri.go:89] found id: "4fd82fcf0a95c7ded90099f1ef94b195a1bfbec5996b4c8707133b0ae2e94054"
	I1124 14:01:34.576646  638395 cri.go:89] found id: "c060c8b92a797680bc8311ef0a54ce5bacbba9cdfb27356a2c9ebd54d3f1eba9"
	I1124 14:01:34.576650  638395 cri.go:89] found id: "7a8be54b5dc721d84f31ea8fd1ee274f5d8e338f35ccf6545b4ae1a0ae3390eb"
	I1124 14:01:34.576655  638395 cri.go:89] found id: "9272ef68efbd4d16c91f204260a6c267f366f85059e13af91359474c4768da2f"
	I1124 14:01:34.576680  638395 cri.go:89] found id: "e8aa71d4c87146e95bcfe0c7b254a87aac05ab293ba95da6798abedbd3f78277"
	I1124 14:01:34.576688  638395 cri.go:89] found id: "4a15b0d874fde66337525492ac9435a3bc9f5a8b35fc018a641eb84f0c7e048f"
	I1124 14:01:34.576692  638395 cri.go:89] found id: ""
	I1124 14:01:34.576742  638395 ssh_runner.go:195] Run: sudo runc list -f json
	I1124 14:01:34.734368  638395 out.go:203] 
	W1124 14:01:34.740705  638395 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T14:01:34Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T14:01:34Z" level=error msg="open /run/runc: no such file or directory"
	
	W1124 14:01:34.740730  638395 out.go:285] * 
	* 
	W1124 14:01:34.747776  638395 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1124 14:01:34.867710  638395 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p embed-certs-456660 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-456660
helpers_test.go:243: (dbg) docker inspect embed-certs-456660:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "387e2d09bc80bb668bbaa3f0cedcf60d4cec831023dd4ebe53def163801cea73",
	        "Created": "2025-11-24T13:59:02.932884414Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 622708,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T14:00:32.51498759Z",
	            "FinishedAt": "2025-11-24T14:00:31.529681915Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/387e2d09bc80bb668bbaa3f0cedcf60d4cec831023dd4ebe53def163801cea73/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/387e2d09bc80bb668bbaa3f0cedcf60d4cec831023dd4ebe53def163801cea73/hostname",
	        "HostsPath": "/var/lib/docker/containers/387e2d09bc80bb668bbaa3f0cedcf60d4cec831023dd4ebe53def163801cea73/hosts",
	        "LogPath": "/var/lib/docker/containers/387e2d09bc80bb668bbaa3f0cedcf60d4cec831023dd4ebe53def163801cea73/387e2d09bc80bb668bbaa3f0cedcf60d4cec831023dd4ebe53def163801cea73-json.log",
	        "Name": "/embed-certs-456660",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "embed-certs-456660:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-456660",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "387e2d09bc80bb668bbaa3f0cedcf60d4cec831023dd4ebe53def163801cea73",
	                "LowerDir": "/var/lib/docker/overlay2/8be6e07f832a00279236a6de030345420fe4432951998b924d1c7aacc8f058ed-init/diff:/var/lib/docker/overlay2/b17d6205cf290186b389ac7c1255d7274fea54ef27df9ff8755bddd2d25eb638/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8be6e07f832a00279236a6de030345420fe4432951998b924d1c7aacc8f058ed/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8be6e07f832a00279236a6de030345420fe4432951998b924d1c7aacc8f058ed/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8be6e07f832a00279236a6de030345420fe4432951998b924d1c7aacc8f058ed/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "embed-certs-456660",
	                "Source": "/var/lib/docker/volumes/embed-certs-456660/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-456660",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-456660",
	                "name.minikube.sigs.k8s.io": "embed-certs-456660",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "aa8d7e52444ea0a3e85a106881e27734e6dc5833b4805d9f7dec8aa1f4025942",
	            "SandboxKey": "/var/run/docker/netns/aa8d7e52444e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33483"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33484"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33487"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33485"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33486"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-456660": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "95ddebcd3d89852aa68144f21da1b1af75512bc90f1d459df2c763b06d58452c",
	                    "EndpointID": "77570f1f4b3d1c90a70f970e1ad6379ba210d11befbb19e066bfe6657a7e1d23",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "d2:2e:18:90:38:93",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-456660",
	                        "387e2d09bc80"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-456660 -n embed-certs-456660
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-456660 -n embed-certs-456660: exit status 2 (332.511958ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-456660 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-456660 logs -n 25: (1.831020664s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                ARGS                                                                                │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p auto-165759 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                       │ auto-165759                  │ jenkins │ v1.37.0 │ 24 Nov 25 14:01 UTC │                     │
	│ ssh     │ -p auto-165759 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                 │ auto-165759                  │ jenkins │ v1.37.0 │ 24 Nov 25 14:01 UTC │ 24 Nov 25 14:01 UTC │
	│ ssh     │ -p auto-165759 sudo cri-dockerd --version                                                                                                                          │ auto-165759                  │ jenkins │ v1.37.0 │ 24 Nov 25 14:01 UTC │ 24 Nov 25 14:01 UTC │
	│ ssh     │ -p auto-165759 sudo systemctl status containerd --all --full --no-pager                                                                                            │ auto-165759                  │ jenkins │ v1.37.0 │ 24 Nov 25 14:01 UTC │                     │
	│ ssh     │ -p auto-165759 sudo systemctl cat containerd --no-pager                                                                                                            │ auto-165759                  │ jenkins │ v1.37.0 │ 24 Nov 25 14:01 UTC │ 24 Nov 25 14:01 UTC │
	│ ssh     │ -p auto-165759 sudo cat /lib/systemd/system/containerd.service                                                                                                     │ auto-165759                  │ jenkins │ v1.37.0 │ 24 Nov 25 14:01 UTC │ 24 Nov 25 14:01 UTC │
	│ ssh     │ -p auto-165759 sudo cat /etc/containerd/config.toml                                                                                                                │ auto-165759                  │ jenkins │ v1.37.0 │ 24 Nov 25 14:01 UTC │ 24 Nov 25 14:01 UTC │
	│ ssh     │ -p auto-165759 sudo containerd config dump                                                                                                                         │ auto-165759                  │ jenkins │ v1.37.0 │ 24 Nov 25 14:01 UTC │ 24 Nov 25 14:01 UTC │
	│ ssh     │ -p auto-165759 sudo systemctl status crio --all --full --no-pager                                                                                                  │ auto-165759                  │ jenkins │ v1.37.0 │ 24 Nov 25 14:01 UTC │ 24 Nov 25 14:01 UTC │
	│ ssh     │ -p auto-165759 sudo systemctl cat crio --no-pager                                                                                                                  │ auto-165759                  │ jenkins │ v1.37.0 │ 24 Nov 25 14:01 UTC │ 24 Nov 25 14:01 UTC │
	│ ssh     │ -p auto-165759 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                        │ auto-165759                  │ jenkins │ v1.37.0 │ 24 Nov 25 14:01 UTC │ 24 Nov 25 14:01 UTC │
	│ ssh     │ -p auto-165759 sudo crio config                                                                                                                                    │ auto-165759                  │ jenkins │ v1.37.0 │ 24 Nov 25 14:01 UTC │ 24 Nov 25 14:01 UTC │
	│ delete  │ -p auto-165759                                                                                                                                                     │ auto-165759                  │ jenkins │ v1.37.0 │ 24 Nov 25 14:01 UTC │ 24 Nov 25 14:01 UTC │
	│ start   │ -p calico-165759 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio                             │ calico-165759                │ jenkins │ v1.37.0 │ 24 Nov 25 14:01 UTC │                     │
	│ ssh     │ -p kindnet-165759 pgrep -a kubelet                                                                                                                                 │ kindnet-165759               │ jenkins │ v1.37.0 │ 24 Nov 25 14:01 UTC │ 24 Nov 25 14:01 UTC │
	│ image   │ default-k8s-diff-port-098307 image list --format=json                                                                                                              │ default-k8s-diff-port-098307 │ jenkins │ v1.37.0 │ 24 Nov 25 14:01 UTC │ 24 Nov 25 14:01 UTC │
	│ pause   │ -p default-k8s-diff-port-098307 --alsologtostderr -v=1                                                                                                             │ default-k8s-diff-port-098307 │ jenkins │ v1.37.0 │ 24 Nov 25 14:01 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-098307                                                                                                                                    │ default-k8s-diff-port-098307 │ jenkins │ v1.37.0 │ 24 Nov 25 14:01 UTC │ 24 Nov 25 14:01 UTC │
	│ delete  │ -p default-k8s-diff-port-098307                                                                                                                                    │ default-k8s-diff-port-098307 │ jenkins │ v1.37.0 │ 24 Nov 25 14:01 UTC │ 24 Nov 25 14:01 UTC │
	│ start   │ -p custom-flannel-165759 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio │ custom-flannel-165759        │ jenkins │ v1.37.0 │ 24 Nov 25 14:01 UTC │                     │
	│ image   │ embed-certs-456660 image list --format=json                                                                                                                        │ embed-certs-456660           │ jenkins │ v1.37.0 │ 24 Nov 25 14:01 UTC │ 24 Nov 25 14:01 UTC │
	│ pause   │ -p embed-certs-456660 --alsologtostderr -v=1                                                                                                                       │ embed-certs-456660           │ jenkins │ v1.37.0 │ 24 Nov 25 14:01 UTC │                     │
	│ ssh     │ -p kindnet-165759 sudo cat /etc/nsswitch.conf                                                                                                                      │ kindnet-165759               │ jenkins │ v1.37.0 │ 24 Nov 25 14:01 UTC │ 24 Nov 25 14:01 UTC │
	│ ssh     │ -p kindnet-165759 sudo cat /etc/hosts                                                                                                                              │ kindnet-165759               │ jenkins │ v1.37.0 │ 24 Nov 25 14:01 UTC │ 24 Nov 25 14:01 UTC │
	│ ssh     │ -p kindnet-165759 sudo cat /etc/resolv.conf                                                                                                                        │ kindnet-165759               │ jenkins │ v1.37.0 │ 24 Nov 25 14:01 UTC │ 24 Nov 25 14:01 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 14:01:30
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 14:01:30.977432  637854 out.go:360] Setting OutFile to fd 1 ...
	I1124 14:01:30.977763  637854 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 14:01:30.977780  637854 out.go:374] Setting ErrFile to fd 2...
	I1124 14:01:30.977786  637854 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 14:01:30.978150  637854 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-348000/.minikube/bin
	I1124 14:01:30.978755  637854 out.go:368] Setting JSON to false
	I1124 14:01:30.980398  637854 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":9838,"bootTime":1763983053,"procs":310,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 14:01:30.980534  637854 start.go:143] virtualization: kvm guest
	I1124 14:01:30.982624  637854 out.go:179] * [custom-flannel-165759] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1124 14:01:30.984460  637854 out.go:179]   - MINIKUBE_LOCATION=21932
	I1124 14:01:30.984459  637854 notify.go:221] Checking for updates...
	I1124 14:01:30.987021  637854 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 14:01:30.992109  637854 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21932-348000/kubeconfig
	I1124 14:01:30.993353  637854 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21932-348000/.minikube
	I1124 14:01:30.994928  637854 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1124 14:01:30.996051  637854 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 14:01:30.997819  637854 config.go:182] Loaded profile config "calico-165759": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 14:01:30.997997  637854 config.go:182] Loaded profile config "embed-certs-456660": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 14:01:30.998101  637854 config.go:182] Loaded profile config "kindnet-165759": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 14:01:30.998262  637854 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 14:01:31.031369  637854 docker.go:124] docker version: linux-29.0.3:Docker Engine - Community
	I1124 14:01:31.031523  637854 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 14:01:31.102480  637854 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-24 14:01:31.092144911 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 14:01:31.102632  637854 docker.go:319] overlay module found
	I1124 14:01:31.107985  637854 out.go:179] * Using the docker driver based on user configuration
	I1124 14:01:29.103918  633029 out.go:252]   - Booting up control plane ...
	I1124 14:01:29.104011  633029 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1124 14:01:29.104098  633029 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1124 14:01:29.104992  633029 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1124 14:01:29.118130  633029 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1124 14:01:29.118277  633029 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1124 14:01:29.124246  633029 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1124 14:01:29.124657  633029 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1124 14:01:29.124726  633029 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1124 14:01:29.221232  633029 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1124 14:01:29.221398  633029 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1124 14:01:30.722566  633029 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.501453652s
	I1124 14:01:30.725844  633029 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1124 14:01:30.725999  633029 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1124 14:01:30.726128  633029 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1124 14:01:30.726233  633029 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1124 14:01:31.109311  637854 start.go:309] selected driver: docker
	I1124 14:01:31.109326  637854 start.go:927] validating driver "docker" against <nil>
	I1124 14:01:31.109352  637854 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 14:01:31.110109  637854 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 14:01:31.180834  637854 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-24 14:01:31.16882287 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 14:01:31.181072  637854 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1124 14:01:31.181376  637854 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 14:01:31.182831  637854 out.go:179] * Using Docker driver with root privileges
	I1124 14:01:31.183791  637854 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I1124 14:01:31.183822  637854 start_flags.go:336] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I1124 14:01:31.183930  637854 start.go:353] cluster config:
	{Name:custom-flannel-165759 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:custom-flannel-165759 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 14:01:31.185146  637854 out.go:179] * Starting "custom-flannel-165759" primary control-plane node in "custom-flannel-165759" cluster
	I1124 14:01:31.186115  637854 cache.go:134] Beginning downloading kic base image for docker with crio
	I1124 14:01:31.187124  637854 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1124 14:01:31.188040  637854 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 14:01:31.188069  637854 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21932-348000/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1124 14:01:31.188080  637854 cache.go:65] Caching tarball of preloaded images
	I1124 14:01:31.188136  637854 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1124 14:01:31.188181  637854 preload.go:238] Found /home/jenkins/minikube-integration/21932-348000/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1124 14:01:31.188197  637854 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1124 14:01:31.188287  637854 profile.go:143] Saving config to /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/custom-flannel-165759/config.json ...
	I1124 14:01:31.188303  637854 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/custom-flannel-165759/config.json: {Name:mk5ad4316a0c090b07c4c48faf85307b6f1a9bbf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:01:31.208985  637854 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1124 14:01:31.209003  637854 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1124 14:01:31.209017  637854 cache.go:240] Successfully downloaded all kic artifacts
	I1124 14:01:31.209051  637854 start.go:360] acquireMachinesLock for custom-flannel-165759: {Name:mk252510b504221947d9e6c5baba930277b39ae5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 14:01:31.209158  637854 start.go:364] duration metric: took 83.644µs to acquireMachinesLock for "custom-flannel-165759"
	I1124 14:01:31.209184  637854 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-165759 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:custom-flannel-165759 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 14:01:31.209270  637854 start.go:125] createHost starting for "" (driver="docker")
	
	
	==> CRI-O <==
	Nov 24 14:00:53 embed-certs-456660 crio[563]: time="2025-11-24T14:00:53.375581856Z" level=info msg="Created container 4a15b0d874fde66337525492ac9435a3bc9f5a8b35fc018a641eb84f0c7e048f: kubernetes-dashboard/kubernetes-dashboard-855c9754f9-dz7pl/kubernetes-dashboard" id=18fcabb6-7200-4e25-a6e6-760d596aa3f9 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 14:00:53 embed-certs-456660 crio[563]: time="2025-11-24T14:00:53.376087107Z" level=info msg="Starting container: 4a15b0d874fde66337525492ac9435a3bc9f5a8b35fc018a641eb84f0c7e048f" id=74be4958-7336-4181-adcd-eed608c66b1a name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 14:00:53 embed-certs-456660 crio[563]: time="2025-11-24T14:00:53.377633525Z" level=info msg="Started container" PID=1722 containerID=4a15b0d874fde66337525492ac9435a3bc9f5a8b35fc018a641eb84f0c7e048f description=kubernetes-dashboard/kubernetes-dashboard-855c9754f9-dz7pl/kubernetes-dashboard id=74be4958-7336-4181-adcd-eed608c66b1a name=/runtime.v1.RuntimeService/StartContainer sandboxID=04d5a57ce32d4d68046046a1872326e5bc80d0035b693507e46a3a7752456d56
	Nov 24 14:01:07 embed-certs-456660 crio[563]: time="2025-11-24T14:01:07.820879854Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=da1b30a6-362a-4dd0-9d11-a7ce9562c74d name=/runtime.v1.ImageService/ImageStatus
	Nov 24 14:01:07 embed-certs-456660 crio[563]: time="2025-11-24T14:01:07.823397355Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=f84ed377-74b9-4cf3-b2d2-b8c848513a71 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 14:01:07 embed-certs-456660 crio[563]: time="2025-11-24T14:01:07.826555641Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fjr27/dashboard-metrics-scraper" id=5cba4364-4b2b-4044-b718-e6229084b632 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 14:01:07 embed-certs-456660 crio[563]: time="2025-11-24T14:01:07.826698552Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 14:01:07 embed-certs-456660 crio[563]: time="2025-11-24T14:01:07.832780214Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 14:01:07 embed-certs-456660 crio[563]: time="2025-11-24T14:01:07.833310782Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 14:01:07 embed-certs-456660 crio[563]: time="2025-11-24T14:01:07.859963102Z" level=info msg="Created container e8aa71d4c87146e95bcfe0c7b254a87aac05ab293ba95da6798abedbd3f78277: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fjr27/dashboard-metrics-scraper" id=5cba4364-4b2b-4044-b718-e6229084b632 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 14:01:07 embed-certs-456660 crio[563]: time="2025-11-24T14:01:07.860492173Z" level=info msg="Starting container: e8aa71d4c87146e95bcfe0c7b254a87aac05ab293ba95da6798abedbd3f78277" id=c7fdff5b-e3cb-447c-be01-2f6f31381d3f name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 14:01:07 embed-certs-456660 crio[563]: time="2025-11-24T14:01:07.86235239Z" level=info msg="Started container" PID=1740 containerID=e8aa71d4c87146e95bcfe0c7b254a87aac05ab293ba95da6798abedbd3f78277 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fjr27/dashboard-metrics-scraper id=c7fdff5b-e3cb-447c-be01-2f6f31381d3f name=/runtime.v1.RuntimeService/StartContainer sandboxID=3687e14564819c062c1a2ad838a1b751a37f19fffc1c10264076c195bdd0e5d2
	Nov 24 14:01:07 embed-certs-456660 crio[563]: time="2025-11-24T14:01:07.938879526Z" level=info msg="Removing container: 99bb803041cdc9efaa18c0f37f850984454d9fc6fc7b091a4557d21f931949f6" id=9465689b-aac5-452d-a753-629807d69af3 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 24 14:01:07 embed-certs-456660 crio[563]: time="2025-11-24T14:01:07.950427812Z" level=info msg="Removed container 99bb803041cdc9efaa18c0f37f850984454d9fc6fc7b091a4557d21f931949f6: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fjr27/dashboard-metrics-scraper" id=9465689b-aac5-452d-a753-629807d69af3 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 24 14:01:12 embed-certs-456660 crio[563]: time="2025-11-24T14:01:12.950838066Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=1d03aca2-447b-4d0a-8323-092661a023e1 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 14:01:12 embed-certs-456660 crio[563]: time="2025-11-24T14:01:12.951853242Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=61a97c9b-b9b2-4867-a004-00f438d052b3 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 14:01:12 embed-certs-456660 crio[563]: time="2025-11-24T14:01:12.952968808Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=8f222509-6f8a-40e9-82d0-aeb897b75ec9 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 14:01:12 embed-certs-456660 crio[563]: time="2025-11-24T14:01:12.953112114Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 14:01:12 embed-certs-456660 crio[563]: time="2025-11-24T14:01:12.957561741Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 14:01:12 embed-certs-456660 crio[563]: time="2025-11-24T14:01:12.957700477Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/7450cfc0f2deead6aa7f761ca6cf7bf6f79b9a403c67cfd42a96a7921a299ea2/merged/etc/passwd: no such file or directory"
	Nov 24 14:01:12 embed-certs-456660 crio[563]: time="2025-11-24T14:01:12.957721856Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/7450cfc0f2deead6aa7f761ca6cf7bf6f79b9a403c67cfd42a96a7921a299ea2/merged/etc/group: no such file or directory"
	Nov 24 14:01:12 embed-certs-456660 crio[563]: time="2025-11-24T14:01:12.957952042Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 14:01:12 embed-certs-456660 crio[563]: time="2025-11-24T14:01:12.987178449Z" level=info msg="Created container 2fee0c2f6ff9ffd6bc9a2054e39a4cf266c5e67a179ca9f84bedb2135194353d: kube-system/storage-provisioner/storage-provisioner" id=8f222509-6f8a-40e9-82d0-aeb897b75ec9 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 14:01:12 embed-certs-456660 crio[563]: time="2025-11-24T14:01:12.987728064Z" level=info msg="Starting container: 2fee0c2f6ff9ffd6bc9a2054e39a4cf266c5e67a179ca9f84bedb2135194353d" id=87c30714-1d1d-4f57-8e26-685b8fa453ac name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 14:01:12 embed-certs-456660 crio[563]: time="2025-11-24T14:01:12.989527551Z" level=info msg="Started container" PID=1756 containerID=2fee0c2f6ff9ffd6bc9a2054e39a4cf266c5e67a179ca9f84bedb2135194353d description=kube-system/storage-provisioner/storage-provisioner id=87c30714-1d1d-4f57-8e26-685b8fa453ac name=/runtime.v1.RuntimeService/StartContainer sandboxID=176c671987df90400b41eff8cde4ada333bb3f38cabce3aa9d62d0253d877128
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	2fee0c2f6ff9f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           23 seconds ago      Running             storage-provisioner         1                   176c671987df9       storage-provisioner                          kube-system
	e8aa71d4c8714       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           28 seconds ago      Exited              dashboard-metrics-scraper   2                   3687e14564819       dashboard-metrics-scraper-6ffb444bf9-fjr27   kubernetes-dashboard
	4a15b0d874fde       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   43 seconds ago      Running             kubernetes-dashboard        0                   04d5a57ce32d4       kubernetes-dashboard-855c9754f9-dz7pl        kubernetes-dashboard
	5501ca0c7fb4e       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           54 seconds ago      Running             coredns                     0                   6afcd39d366e0       coredns-66bc5c9577-nnp2c                     kube-system
	4369d7b07fb50       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           54 seconds ago      Running             busybox                     1                   5769afe839db1       busybox                                      default
	87daf6e06706d       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           54 seconds ago      Running             kube-proxy                  0                   add4b82153862       kube-proxy-k5bxk                             kube-system
	7d104f956282e       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           54 seconds ago      Running             kindnet-cni                 0                   cec9436a97b7c       kindnet-vlqg6                                kube-system
	874e0893a4626       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           54 seconds ago      Exited              storage-provisioner         0                   176c671987df9       storage-provisioner                          kube-system
	4fd82fcf0a95c       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           57 seconds ago      Running             kube-apiserver              0                   5cb793f20a83a       kube-apiserver-embed-certs-456660            kube-system
	c060c8b92a797       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           57 seconds ago      Running             kube-scheduler              0                   87c4da181d0df       kube-scheduler-embed-certs-456660            kube-system
	7a8be54b5dc72       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           57 seconds ago      Running             kube-controller-manager     0                   9e366ad7408f5       kube-controller-manager-embed-certs-456660   kube-system
	9272ef68efbd4       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           57 seconds ago      Running             etcd                        0                   a8e44bb36305b       etcd-embed-certs-456660                      kube-system
	
	
	==> coredns [5501ca0c7fb4eb766d1f1267cdd592ef1abb6f036d0c0e6e686f3dfb130ff854] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:46602 - 25947 "HINFO IN 4176784439970732156.6497882213263914856. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.123400726s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-456660
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-456660
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b5d1c9f4e75f4e638a533695fd62619949cefcab
	                    minikube.k8s.io/name=embed-certs-456660
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T13_59_16_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 13:59:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-456660
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 14:01:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 14:01:12 +0000   Mon, 24 Nov 2025 13:59:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 14:01:12 +0000   Mon, 24 Nov 2025 13:59:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 14:01:12 +0000   Mon, 24 Nov 2025 13:59:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 14:01:12 +0000   Mon, 24 Nov 2025 14:00:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    embed-certs-456660
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                950f3d12-76ba-49d9-8f39-c1dd2a09eea1
	  Boot ID:                    9a34d64a-eb17-4892-9c0b-855837aec864
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         92s
	  kube-system                 coredns-66bc5c9577-nnp2c                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     2m15s
	  kube-system                 etcd-embed-certs-456660                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m21s
	  kube-system                 kindnet-vlqg6                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      2m15s
	  kube-system                 kube-apiserver-embed-certs-456660             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m21s
	  kube-system                 kube-controller-manager-embed-certs-456660    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m21s
	  kube-system                 kube-proxy-k5bxk                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m15s
	  kube-system                 kube-scheduler-embed-certs-456660             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m21s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m15s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-fjr27    0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-dz7pl         0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 2m14s              kube-proxy       
	  Normal  Starting                 54s                kube-proxy       
	  Normal  NodeHasSufficientMemory  2m21s              kubelet          Node embed-certs-456660 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m21s              kubelet          Node embed-certs-456660 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m21s              kubelet          Node embed-certs-456660 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m21s              kubelet          Starting kubelet.
	  Normal  RegisteredNode           2m16s              node-controller  Node embed-certs-456660 event: Registered Node embed-certs-456660 in Controller
	  Normal  NodeReady                94s                kubelet          Node embed-certs-456660 status is now: NodeReady
	  Normal  Starting                 58s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  58s (x8 over 58s)  kubelet          Node embed-certs-456660 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    58s (x8 over 58s)  kubelet          Node embed-certs-456660 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     58s (x8 over 58s)  kubelet          Node embed-certs-456660 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           51s                node-controller  Node embed-certs-456660 event: Registered Node embed-certs-456660 in Controller
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a c8 62 0b 56 43 08 06
	[  +0.000389] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 7e 1c 4f d3 0f 6c 08 06
	[Nov24 13:16] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +1.054353] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +1.023912] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +1.023872] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +1.023897] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +1.023891] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +2.047768] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +4.031637] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +8.191144] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[ +16.382308] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[Nov24 13:17] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	
	
	==> etcd [9272ef68efbd4d16c91f204260a6c267f366f85059e13af91359474c4768da2f] <==
	{"level":"warn","ts":"2025-11-24T14:00:40.977921Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43550","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:00:40.990139Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43570","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:00:41.004021Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:00:41.017673Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43610","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:00:41.027482Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43624","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:00:41.036150Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43644","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:00:41.048294Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43674","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:00:41.055818Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43704","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:00:41.064001Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43716","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:00:41.073053Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43738","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:00:41.082393Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43748","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:00:41.091388Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43752","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:00:41.104431Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43776","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:00:41.114620Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43790","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:00:41.123431Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43808","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:00:41.142218Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43834","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:00:41.151340Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43844","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:00:41.159422Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:00:41.215728Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43886","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-24T14:00:47.070043Z","caller":"traceutil/trace.go:172","msg":"trace[1144434822] transaction","detail":"{read_only:false; response_revision:519; number_of_response:1; }","duration":"140.819373ms","start":"2025-11-24T14:00:46.929204Z","end":"2025-11-24T14:00:47.070023Z","steps":["trace[1144434822] 'process raft request'  (duration: 128.48459ms)","trace[1144434822] 'compare'  (duration: 12.19651ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-24T14:00:47.182753Z","caller":"traceutil/trace.go:172","msg":"trace[418576795] linearizableReadLoop","detail":"{readStateIndex:552; appliedIndex:552; }","duration":"101.351187ms","start":"2025-11-24T14:00:47.081368Z","end":"2025-11-24T14:00:47.182719Z","steps":["trace[418576795] 'read index received'  (duration: 101.344055ms)","trace[418576795] 'applied index is now lower than readState.Index'  (duration: 6.137µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-24T14:00:47.183263Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"101.856558ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/embed-certs-456660\" limit:1 ","response":"range_response_count:1 size:5708"}
	{"level":"info","ts":"2025-11-24T14:00:47.183308Z","caller":"traceutil/trace.go:172","msg":"trace[239692795] transaction","detail":"{read_only:false; response_revision:520; number_of_response:1; }","duration":"104.363259ms","start":"2025-11-24T14:00:47.078920Z","end":"2025-11-24T14:00:47.183283Z","steps":["trace[239692795] 'process raft request'  (duration: 103.830048ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T14:00:47.183325Z","caller":"traceutil/trace.go:172","msg":"trace[715254745] range","detail":"{range_begin:/registry/minions/embed-certs-456660; range_end:; response_count:1; response_revision:519; }","duration":"101.950366ms","start":"2025-11-24T14:00:47.081364Z","end":"2025-11-24T14:00:47.183315Z","steps":["trace[715254745] 'agreement among raft nodes before linearized reading'  (duration: 101.432129ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T14:00:47.666971Z","caller":"traceutil/trace.go:172","msg":"trace[1776998844] transaction","detail":"{read_only:false; response_revision:537; number_of_response:1; }","duration":"115.140069ms","start":"2025-11-24T14:00:47.551807Z","end":"2025-11-24T14:00:47.666947Z","steps":["trace[1776998844] 'process raft request'  (duration: 51.673517ms)","trace[1776998844] 'compare'  (duration: 63.093595ms)"],"step_count":2}
	
	
	==> kernel <==
	 14:01:36 up  2:44,  0 user,  load average: 3.92, 3.35, 2.27
	Linux embed-certs-456660 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [7d104f956282eb0c0892603f25ca5ca1dcbb6e0b3315dd73f7a02f9d43b26a6e] <==
	I1124 14:00:42.495917       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 14:00:42.496269       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1124 14:00:42.496454       1 main.go:148] setting mtu 1500 for CNI 
	I1124 14:00:42.496477       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 14:00:42.496493       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T14:00:42Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 14:00:42.696807       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 14:00:42.696838       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 14:00:42.696859       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 14:00:42.697033       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1124 14:00:42.997517       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 14:00:42.997547       1 metrics.go:72] Registering metrics
	I1124 14:00:42.997617       1 controller.go:711] "Syncing nftables rules"
	I1124 14:00:52.696266       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1124 14:00:52.696328       1 main.go:301] handling current node
	I1124 14:01:02.701005       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1124 14:01:02.701106       1 main.go:301] handling current node
	I1124 14:01:12.696288       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1124 14:01:12.696345       1 main.go:301] handling current node
	I1124 14:01:22.698092       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1124 14:01:22.698127       1 main.go:301] handling current node
	I1124 14:01:32.705382       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1124 14:01:32.705419       1 main.go:301] handling current node
	
	
	==> kube-apiserver [4fd82fcf0a95c7ded90099f1ef94b195a1bfbec5996b4c8707133b0ae2e94054] <==
	I1124 14:00:41.808519       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1124 14:00:41.810281       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1124 14:00:41.812702       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1124 14:00:41.812811       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1124 14:00:41.812831       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1124 14:00:41.813314       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1124 14:00:41.817770       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1124 14:00:41.817796       1 policy_source.go:240] refreshing policies
	I1124 14:00:41.828663       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1124 14:00:41.848815       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1124 14:00:41.861144       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1124 14:00:41.861200       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1124 14:00:41.866528       1 cache.go:39] Caches are synced for autoregister controller
	I1124 14:00:41.968278       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1124 14:00:42.266761       1 controller.go:667] quota admission added evaluator for: namespaces
	I1124 14:00:42.309260       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1124 14:00:42.336607       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 14:00:42.344586       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 14:00:42.390050       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.97.154.45"}
	I1124 14:00:42.402176       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.98.8.34"}
	I1124 14:00:42.710496       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1124 14:00:45.572986       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1124 14:00:45.573042       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1124 14:00:45.670988       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1124 14:00:45.773670       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [7a8be54b5dc721d84f31ea8fd1ee274f5d8e338f35ccf6545b4ae1a0ae3390eb] <==
	I1124 14:00:45.201357       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1124 14:00:45.202534       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1124 14:00:45.216936       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1124 14:00:45.216964       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1124 14:00:45.217145       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1124 14:00:45.217236       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-456660"
	I1124 14:00:45.216978       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1124 14:00:45.217317       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1124 14:00:45.217371       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1124 14:00:45.216978       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1124 14:00:45.217358       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1124 14:00:45.217334       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1124 14:00:45.217434       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1124 14:00:45.217554       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1124 14:00:45.217745       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1124 14:00:45.217800       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1124 14:00:45.218448       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1124 14:00:45.219621       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1124 14:00:45.222952       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1124 14:00:45.224172       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 14:00:45.225272       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 14:00:45.225287       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1124 14:00:45.226428       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1124 14:00:45.233741       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1124 14:00:45.244084       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [87daf6e06706d8d8b44bbb2aa7f0e1165e3bb91aa705936757264cda31996eb4] <==
	I1124 14:00:42.317959       1 server_linux.go:53] "Using iptables proxy"
	I1124 14:00:42.385299       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1124 14:00:42.485756       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1124 14:00:42.485790       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1124 14:00:42.485881       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 14:00:42.510620       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 14:00:42.510676       1 server_linux.go:132] "Using iptables Proxier"
	I1124 14:00:42.515860       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 14:00:42.516424       1 server.go:527] "Version info" version="v1.34.1"
	I1124 14:00:42.516665       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 14:00:42.518766       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 14:00:42.518929       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 14:00:42.519861       1 config.go:200] "Starting service config controller"
	I1124 14:00:42.519932       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 14:00:42.520121       1 config.go:106] "Starting endpoint slice config controller"
	I1124 14:00:42.520147       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 14:00:42.520190       1 config.go:309] "Starting node config controller"
	I1124 14:00:42.520206       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 14:00:42.620337       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1124 14:00:42.620379       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1124 14:00:42.620350       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1124 14:00:42.622036       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [c060c8b92a797680bc8311ef0a54ce5bacbba9cdfb27356a2c9ebd54d3f1eba9] <==
	I1124 14:00:40.402318       1 serving.go:386] Generated self-signed cert in-memory
	I1124 14:00:41.834873       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1124 14:00:41.834921       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 14:00:41.842129       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1124 14:00:41.842238       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1124 14:00:41.846564       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 14:00:41.846591       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 14:00:41.846966       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1124 14:00:41.846979       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1124 14:00:41.850425       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1124 14:00:41.850994       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1124 14:00:41.947101       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 14:00:41.947296       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1124 14:00:41.951589       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	
	
	==> kubelet <==
	Nov 24 14:00:45 embed-certs-456660 kubelet[725]: I1124 14:00:45.789389     725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdwn9\" (UniqueName: \"kubernetes.io/projected/077f51d0-5205-40db-a330-74520645fac9-kube-api-access-tdwn9\") pod \"dashboard-metrics-scraper-6ffb444bf9-fjr27\" (UID: \"077f51d0-5205-40db-a330-74520645fac9\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fjr27"
	Nov 24 14:00:48 embed-certs-456660 kubelet[725]: I1124 14:00:48.558290     725 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 24 14:00:49 embed-certs-456660 kubelet[725]: I1124 14:00:49.885489     725 scope.go:117] "RemoveContainer" containerID="8a14c90ac13f97ff67e3b85602e9eb08bfef67fd48948ffa74bd4d0ca8e7b604"
	Nov 24 14:00:50 embed-certs-456660 kubelet[725]: I1124 14:00:50.890680     725 scope.go:117] "RemoveContainer" containerID="8a14c90ac13f97ff67e3b85602e9eb08bfef67fd48948ffa74bd4d0ca8e7b604"
	Nov 24 14:00:50 embed-certs-456660 kubelet[725]: I1124 14:00:50.890928     725 scope.go:117] "RemoveContainer" containerID="99bb803041cdc9efaa18c0f37f850984454d9fc6fc7b091a4557d21f931949f6"
	Nov 24 14:00:50 embed-certs-456660 kubelet[725]: E1124 14:00:50.892078     725 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fjr27_kubernetes-dashboard(077f51d0-5205-40db-a330-74520645fac9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fjr27" podUID="077f51d0-5205-40db-a330-74520645fac9"
	Nov 24 14:00:51 embed-certs-456660 kubelet[725]: I1124 14:00:51.895361     725 scope.go:117] "RemoveContainer" containerID="99bb803041cdc9efaa18c0f37f850984454d9fc6fc7b091a4557d21f931949f6"
	Nov 24 14:00:51 embed-certs-456660 kubelet[725]: E1124 14:00:51.895614     725 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fjr27_kubernetes-dashboard(077f51d0-5205-40db-a330-74520645fac9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fjr27" podUID="077f51d0-5205-40db-a330-74520645fac9"
	Nov 24 14:00:52 embed-certs-456660 kubelet[725]: I1124 14:00:52.898379     725 scope.go:117] "RemoveContainer" containerID="99bb803041cdc9efaa18c0f37f850984454d9fc6fc7b091a4557d21f931949f6"
	Nov 24 14:00:52 embed-certs-456660 kubelet[725]: E1124 14:00:52.899099     725 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fjr27_kubernetes-dashboard(077f51d0-5205-40db-a330-74520645fac9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fjr27" podUID="077f51d0-5205-40db-a330-74520645fac9"
	Nov 24 14:00:53 embed-certs-456660 kubelet[725]: I1124 14:00:53.911774     725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-dz7pl" podStartSLOduration=1.5581080360000001 podStartE2EDuration="8.911751879s" podCreationTimestamp="2025-11-24 14:00:45 +0000 UTC" firstStartedPulling="2025-11-24 14:00:45.985395373 +0000 UTC m=+7.279857221" lastFinishedPulling="2025-11-24 14:00:53.339039218 +0000 UTC m=+14.633501064" observedRunningTime="2025-11-24 14:00:53.911623802 +0000 UTC m=+15.206085667" watchObservedRunningTime="2025-11-24 14:00:53.911751879 +0000 UTC m=+15.206213745"
	Nov 24 14:01:07 embed-certs-456660 kubelet[725]: I1124 14:01:07.820272     725 scope.go:117] "RemoveContainer" containerID="99bb803041cdc9efaa18c0f37f850984454d9fc6fc7b091a4557d21f931949f6"
	Nov 24 14:01:07 embed-certs-456660 kubelet[725]: I1124 14:01:07.936923     725 scope.go:117] "RemoveContainer" containerID="99bb803041cdc9efaa18c0f37f850984454d9fc6fc7b091a4557d21f931949f6"
	Nov 24 14:01:07 embed-certs-456660 kubelet[725]: I1124 14:01:07.937138     725 scope.go:117] "RemoveContainer" containerID="e8aa71d4c87146e95bcfe0c7b254a87aac05ab293ba95da6798abedbd3f78277"
	Nov 24 14:01:07 embed-certs-456660 kubelet[725]: E1124 14:01:07.937341     725 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fjr27_kubernetes-dashboard(077f51d0-5205-40db-a330-74520645fac9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fjr27" podUID="077f51d0-5205-40db-a330-74520645fac9"
	Nov 24 14:01:11 embed-certs-456660 kubelet[725]: I1124 14:01:11.566655     725 scope.go:117] "RemoveContainer" containerID="e8aa71d4c87146e95bcfe0c7b254a87aac05ab293ba95da6798abedbd3f78277"
	Nov 24 14:01:11 embed-certs-456660 kubelet[725]: E1124 14:01:11.566875     725 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fjr27_kubernetes-dashboard(077f51d0-5205-40db-a330-74520645fac9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fjr27" podUID="077f51d0-5205-40db-a330-74520645fac9"
	Nov 24 14:01:12 embed-certs-456660 kubelet[725]: I1124 14:01:12.950469     725 scope.go:117] "RemoveContainer" containerID="874e0893a46264332920443ab04e012d22d78baea09033794f22066fb59e4e17"
	Nov 24 14:01:22 embed-certs-456660 kubelet[725]: I1124 14:01:22.820169     725 scope.go:117] "RemoveContainer" containerID="e8aa71d4c87146e95bcfe0c7b254a87aac05ab293ba95da6798abedbd3f78277"
	Nov 24 14:01:22 embed-certs-456660 kubelet[725]: E1124 14:01:22.820377     725 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fjr27_kubernetes-dashboard(077f51d0-5205-40db-a330-74520645fac9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fjr27" podUID="077f51d0-5205-40db-a330-74520645fac9"
	Nov 24 14:01:33 embed-certs-456660 kubelet[725]: I1124 14:01:33.079027     725 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Nov 24 14:01:33 embed-certs-456660 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 24 14:01:33 embed-certs-456660 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 24 14:01:33 embed-certs-456660 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 24 14:01:33 embed-certs-456660 systemd[1]: kubelet.service: Consumed 1.625s CPU time.
	
	
	==> kubernetes-dashboard [4a15b0d874fde66337525492ac9435a3bc9f5a8b35fc018a641eb84f0c7e048f] <==
	2025/11/24 14:00:53 Using namespace: kubernetes-dashboard
	2025/11/24 14:00:53 Using in-cluster config to connect to apiserver
	2025/11/24 14:00:53 Using secret token for csrf signing
	2025/11/24 14:00:53 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/24 14:00:53 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/24 14:00:53 Successful initial request to the apiserver, version: v1.34.1
	2025/11/24 14:00:53 Generating JWE encryption key
	2025/11/24 14:00:53 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/24 14:00:53 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/24 14:00:53 Initializing JWE encryption key from synchronized object
	2025/11/24 14:00:53 Creating in-cluster Sidecar client
	2025/11/24 14:00:53 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/24 14:00:53 Serving insecurely on HTTP port: 9090
	2025/11/24 14:01:23 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/24 14:00:53 Starting overwatch
	
	
	==> storage-provisioner [2fee0c2f6ff9ffd6bc9a2054e39a4cf266c5e67a179ca9f84bedb2135194353d] <==
	I1124 14:01:13.001060       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1124 14:01:13.008099       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1124 14:01:13.008130       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1124 14:01:13.009995       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:01:16.464512       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:01:20.724728       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:01:24.325426       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:01:27.379524       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:01:30.401217       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:01:30.414151       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 14:01:30.414336       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1124 14:01:30.414463       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-456660_9ac06b7c-1fa1-4774-a265-238670bd5e4a!
	I1124 14:01:30.414453       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"651acb2c-b76c-4715-850b-34431f20fd28", APIVersion:"v1", ResourceVersion:"638", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-456660_9ac06b7c-1fa1-4774-a265-238670bd5e4a became leader
	W1124 14:01:30.417190       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:01:30.420179       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 14:01:30.514763       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-456660_9ac06b7c-1fa1-4774-a265-238670bd5e4a!
	W1124 14:01:32.423323       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:01:32.428431       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:01:34.431463       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:01:34.437711       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:01:36.440206       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:01:36.492615       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [874e0893a46264332920443ab04e012d22d78baea09033794f22066fb59e4e17] <==
	I1124 14:00:42.247166       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1124 14:01:12.250257       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
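The kube-proxy log above warns that nodePortAddresses is unset and itself suggests `--nodeport-addresses primary`. A minimal follow-up sketch (not executed as part of this run; on a kubeadm-managed cluster the setting lives in the kube-proxy ConfigMap's KubeProxyConfiguration section):

	# hypothetical follow-up, not part of the recorded test run:
	# set nodePortAddresses: ["primary"] in the KubeProxyConfiguration, then restart kube-proxy
	kubectl --context embed-certs-456660 -n kube-system edit configmap/kube-proxy
	kubectl --context embed-certs-456660 -n kube-system rollout restart daemonset/kube-proxy
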
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-456660 -n embed-certs-456660
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-456660 -n embed-certs-456660: exit status 2 (449.176419ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-456660 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-456660
helpers_test.go:243: (dbg) docker inspect embed-certs-456660:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "387e2d09bc80bb668bbaa3f0cedcf60d4cec831023dd4ebe53def163801cea73",
	        "Created": "2025-11-24T13:59:02.932884414Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 622708,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-24T14:00:32.51498759Z",
	            "FinishedAt": "2025-11-24T14:00:31.529681915Z"
	        },
	        "Image": "sha256:133ca4ac39008d0056ad45d8cb70521d6b70d6e1b8bbff4678fd4b354efbdf70",
	        "ResolvConfPath": "/var/lib/docker/containers/387e2d09bc80bb668bbaa3f0cedcf60d4cec831023dd4ebe53def163801cea73/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/387e2d09bc80bb668bbaa3f0cedcf60d4cec831023dd4ebe53def163801cea73/hostname",
	        "HostsPath": "/var/lib/docker/containers/387e2d09bc80bb668bbaa3f0cedcf60d4cec831023dd4ebe53def163801cea73/hosts",
	        "LogPath": "/var/lib/docker/containers/387e2d09bc80bb668bbaa3f0cedcf60d4cec831023dd4ebe53def163801cea73/387e2d09bc80bb668bbaa3f0cedcf60d4cec831023dd4ebe53def163801cea73-json.log",
	        "Name": "/embed-certs-456660",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "embed-certs-456660:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-456660",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": null,
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "387e2d09bc80bb668bbaa3f0cedcf60d4cec831023dd4ebe53def163801cea73",
	                "LowerDir": "/var/lib/docker/overlay2/8be6e07f832a00279236a6de030345420fe4432951998b924d1c7aacc8f058ed-init/diff:/var/lib/docker/overlay2/b17d6205cf290186b389ac7c1255d7274fea54ef27df9ff8755bddd2d25eb638/diff",
	                "MergedDir": "/var/lib/docker/overlay2/8be6e07f832a00279236a6de030345420fe4432951998b924d1c7aacc8f058ed/merged",
	                "UpperDir": "/var/lib/docker/overlay2/8be6e07f832a00279236a6de030345420fe4432951998b924d1c7aacc8f058ed/diff",
	                "WorkDir": "/var/lib/docker/overlay2/8be6e07f832a00279236a6de030345420fe4432951998b924d1c7aacc8f058ed/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "embed-certs-456660",
	                "Source": "/var/lib/docker/volumes/embed-certs-456660/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-456660",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-456660",
	                "name.minikube.sigs.k8s.io": "embed-certs-456660",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "SandboxID": "aa8d7e52444ea0a3e85a106881e27734e6dc5833b4805d9f7dec8aa1f4025942",
	            "SandboxKey": "/var/run/docker/netns/aa8d7e52444e",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33483"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33484"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33487"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33485"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33486"
	                    }
	                ]
	            },
	            "Networks": {
	                "embed-certs-456660": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2",
	                        "IPv6Address": ""
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "95ddebcd3d89852aa68144f21da1b1af75512bc90f1d459df2c763b06d58452c",
	                    "EndpointID": "77570f1f4b3d1c90a70f970e1ad6379ba210d11befbb19e066bfe6657a7e1d23",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "MacAddress": "d2:2e:18:90:38:93",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-456660",
	                        "387e2d09bc80"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
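The State block in the inspect output above ("Status": "running", "Paused": false) carries the same information the pause test asserts on. Those fields can be read directly with a format template, as a cross-check sketch (not recorded in this run):

	# hypothetical follow-up, not part of the recorded test run:
	docker inspect -f '{{.State.Status}} paused={{.State.Paused}}' embed-certs-456660
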
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-456660 -n embed-certs-456660
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-456660 -n embed-certs-456660: exit status 2 (349.74411ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
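The status command above uses --format={{.Host}} and therefore reports only a single field; the full per-component status can be dumped as JSON for inspection (a follow-up sketch, not captured in this run):

	# hypothetical follow-up, not part of the recorded test run:
	out/minikube-linux-amd64 status -o json -p embed-certs-456660
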
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-456660 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-456660 logs -n 25: (1.339826076s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                ARGS                                                                                │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p auto-165759 sudo cat /lib/systemd/system/containerd.service                                                                                                     │ auto-165759                  │ jenkins │ v1.37.0 │ 24 Nov 25 14:01 UTC │ 24 Nov 25 14:01 UTC │
	│ ssh     │ -p auto-165759 sudo cat /etc/containerd/config.toml                                                                                                                │ auto-165759                  │ jenkins │ v1.37.0 │ 24 Nov 25 14:01 UTC │ 24 Nov 25 14:01 UTC │
	│ ssh     │ -p auto-165759 sudo containerd config dump                                                                                                                         │ auto-165759                  │ jenkins │ v1.37.0 │ 24 Nov 25 14:01 UTC │ 24 Nov 25 14:01 UTC │
	│ ssh     │ -p auto-165759 sudo systemctl status crio --all --full --no-pager                                                                                                  │ auto-165759                  │ jenkins │ v1.37.0 │ 24 Nov 25 14:01 UTC │ 24 Nov 25 14:01 UTC │
	│ ssh     │ -p auto-165759 sudo systemctl cat crio --no-pager                                                                                                                  │ auto-165759                  │ jenkins │ v1.37.0 │ 24 Nov 25 14:01 UTC │ 24 Nov 25 14:01 UTC │
	│ ssh     │ -p auto-165759 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                        │ auto-165759                  │ jenkins │ v1.37.0 │ 24 Nov 25 14:01 UTC │ 24 Nov 25 14:01 UTC │
	│ ssh     │ -p auto-165759 sudo crio config                                                                                                                                    │ auto-165759                  │ jenkins │ v1.37.0 │ 24 Nov 25 14:01 UTC │ 24 Nov 25 14:01 UTC │
	│ delete  │ -p auto-165759                                                                                                                                                     │ auto-165759                  │ jenkins │ v1.37.0 │ 24 Nov 25 14:01 UTC │ 24 Nov 25 14:01 UTC │
	│ start   │ -p calico-165759 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio                             │ calico-165759                │ jenkins │ v1.37.0 │ 24 Nov 25 14:01 UTC │                     │
	│ ssh     │ -p kindnet-165759 pgrep -a kubelet                                                                                                                                 │ kindnet-165759               │ jenkins │ v1.37.0 │ 24 Nov 25 14:01 UTC │ 24 Nov 25 14:01 UTC │
	│ image   │ default-k8s-diff-port-098307 image list --format=json                                                                                                              │ default-k8s-diff-port-098307 │ jenkins │ v1.37.0 │ 24 Nov 25 14:01 UTC │ 24 Nov 25 14:01 UTC │
	│ pause   │ -p default-k8s-diff-port-098307 --alsologtostderr -v=1                                                                                                             │ default-k8s-diff-port-098307 │ jenkins │ v1.37.0 │ 24 Nov 25 14:01 UTC │                     │
	│ delete  │ -p default-k8s-diff-port-098307                                                                                                                                    │ default-k8s-diff-port-098307 │ jenkins │ v1.37.0 │ 24 Nov 25 14:01 UTC │ 24 Nov 25 14:01 UTC │
	│ delete  │ -p default-k8s-diff-port-098307                                                                                                                                    │ default-k8s-diff-port-098307 │ jenkins │ v1.37.0 │ 24 Nov 25 14:01 UTC │ 24 Nov 25 14:01 UTC │
	│ start   │ -p custom-flannel-165759 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio │ custom-flannel-165759        │ jenkins │ v1.37.0 │ 24 Nov 25 14:01 UTC │                     │
	│ image   │ embed-certs-456660 image list --format=json                                                                                                                        │ embed-certs-456660           │ jenkins │ v1.37.0 │ 24 Nov 25 14:01 UTC │ 24 Nov 25 14:01 UTC │
	│ pause   │ -p embed-certs-456660 --alsologtostderr -v=1                                                                                                                       │ embed-certs-456660           │ jenkins │ v1.37.0 │ 24 Nov 25 14:01 UTC │                     │
	│ ssh     │ -p kindnet-165759 sudo cat /etc/nsswitch.conf                                                                                                                      │ kindnet-165759               │ jenkins │ v1.37.0 │ 24 Nov 25 14:01 UTC │ 24 Nov 25 14:01 UTC │
	│ ssh     │ -p kindnet-165759 sudo cat /etc/hosts                                                                                                                              │ kindnet-165759               │ jenkins │ v1.37.0 │ 24 Nov 25 14:01 UTC │ 24 Nov 25 14:01 UTC │
	│ ssh     │ -p kindnet-165759 sudo cat /etc/resolv.conf                                                                                                                        │ kindnet-165759               │ jenkins │ v1.37.0 │ 24 Nov 25 14:01 UTC │ 24 Nov 25 14:01 UTC │
	│ ssh     │ -p kindnet-165759 sudo crictl pods                                                                                                                                 │ kindnet-165759               │ jenkins │ v1.37.0 │ 24 Nov 25 14:01 UTC │ 24 Nov 25 14:01 UTC │
	│ ssh     │ -p kindnet-165759 sudo crictl ps --all                                                                                                                             │ kindnet-165759               │ jenkins │ v1.37.0 │ 24 Nov 25 14:01 UTC │ 24 Nov 25 14:01 UTC │
	│ ssh     │ -p kindnet-165759 sudo find /etc/cni -type f -exec sh -c 'echo {}; cat {}' \;                                                                                      │ kindnet-165759               │ jenkins │ v1.37.0 │ 24 Nov 25 14:01 UTC │ 24 Nov 25 14:01 UTC │
	│ ssh     │ -p kindnet-165759 sudo ip a s                                                                                                                                      │ kindnet-165759               │ jenkins │ v1.37.0 │ 24 Nov 25 14:01 UTC │ 24 Nov 25 14:01 UTC │
	│ ssh     │ -p kindnet-165759 sudo ip r s                                                                                                                                      │ kindnet-165759               │ jenkins │ v1.37.0 │ 24 Nov 25 14:01 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 14:01:30
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 14:01:30.977432  637854 out.go:360] Setting OutFile to fd 1 ...
	I1124 14:01:30.977763  637854 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 14:01:30.977780  637854 out.go:374] Setting ErrFile to fd 2...
	I1124 14:01:30.977786  637854 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 14:01:30.978150  637854 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-348000/.minikube/bin
	I1124 14:01:30.978755  637854 out.go:368] Setting JSON to false
	I1124 14:01:30.980398  637854 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":9838,"bootTime":1763983053,"procs":310,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 14:01:30.980534  637854 start.go:143] virtualization: kvm guest
	I1124 14:01:30.982624  637854 out.go:179] * [custom-flannel-165759] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1124 14:01:30.984460  637854 out.go:179]   - MINIKUBE_LOCATION=21932
	I1124 14:01:30.984459  637854 notify.go:221] Checking for updates...
	I1124 14:01:30.987021  637854 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 14:01:30.992109  637854 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21932-348000/kubeconfig
	I1124 14:01:30.993353  637854 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21932-348000/.minikube
	I1124 14:01:30.994928  637854 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1124 14:01:30.996051  637854 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 14:01:30.997819  637854 config.go:182] Loaded profile config "calico-165759": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 14:01:30.997997  637854 config.go:182] Loaded profile config "embed-certs-456660": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 14:01:30.998101  637854 config.go:182] Loaded profile config "kindnet-165759": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 14:01:30.998262  637854 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 14:01:31.031369  637854 docker.go:124] docker version: linux-29.0.3:Docker Engine - Community
	I1124 14:01:31.031523  637854 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 14:01:31.102480  637854 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-24 14:01:31.092144911 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 14:01:31.102632  637854 docker.go:319] overlay module found
	I1124 14:01:31.107985  637854 out.go:179] * Using the docker driver based on user configuration
	I1124 14:01:29.103918  633029 out.go:252]   - Booting up control plane ...
	I1124 14:01:29.104011  633029 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1124 14:01:29.104098  633029 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1124 14:01:29.104992  633029 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1124 14:01:29.118130  633029 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1124 14:01:29.118277  633029 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1124 14:01:29.124246  633029 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1124 14:01:29.124657  633029 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1124 14:01:29.124726  633029 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1124 14:01:29.221232  633029 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1124 14:01:29.221398  633029 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1124 14:01:30.722566  633029 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.501453652s
	I1124 14:01:30.725844  633029 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1124 14:01:30.725999  633029 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8443/livez
	I1124 14:01:30.726128  633029 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1124 14:01:30.726233  633029 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1124 14:01:31.109311  637854 start.go:309] selected driver: docker
	I1124 14:01:31.109326  637854 start.go:927] validating driver "docker" against <nil>
	I1124 14:01:31.109352  637854 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 14:01:31.110109  637854 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 14:01:31.180834  637854 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:76 SystemTime:2025-11-24 14:01:31.16882287 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 14:01:31.181072  637854 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1124 14:01:31.181376  637854 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1124 14:01:31.182831  637854 out.go:179] * Using Docker driver with root privileges
	I1124 14:01:31.183791  637854 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I1124 14:01:31.183822  637854 start_flags.go:336] Found "testdata/kube-flannel.yaml" CNI - setting NetworkPlugin=cni
	I1124 14:01:31.183930  637854 start.go:353] cluster config:
	{Name:custom-flannel-165759 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:custom-flannel-165759 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath:
StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 14:01:31.185146  637854 out.go:179] * Starting "custom-flannel-165759" primary control-plane node in "custom-flannel-165759" cluster
	I1124 14:01:31.186115  637854 cache.go:134] Beginning downloading kic base image for docker with crio
	I1124 14:01:31.187124  637854 out.go:179] * Pulling base image v0.0.48-1763789673-21948 ...
	I1124 14:01:31.188040  637854 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 14:01:31.188069  637854 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21932-348000/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1124 14:01:31.188080  637854 cache.go:65] Caching tarball of preloaded images
	I1124 14:01:31.188136  637854 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon
	I1124 14:01:31.188181  637854 preload.go:238] Found /home/jenkins/minikube-integration/21932-348000/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1124 14:01:31.188197  637854 cache.go:68] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1124 14:01:31.188287  637854 profile.go:143] Saving config to /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/custom-flannel-165759/config.json ...
	I1124 14:01:31.188303  637854 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/custom-flannel-165759/config.json: {Name:mk5ad4316a0c090b07c4c48faf85307b6f1a9bbf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1124 14:01:31.208985  637854 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f in local docker daemon, skipping pull
	I1124 14:01:31.209003  637854 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f exists in daemon, skipping load
	I1124 14:01:31.209017  637854 cache.go:240] Successfully downloaded all kic artifacts
	I1124 14:01:31.209051  637854 start.go:360] acquireMachinesLock for custom-flannel-165759: {Name:mk252510b504221947d9e6c5baba930277b39ae5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1124 14:01:31.209158  637854 start.go:364] duration metric: took 83.644µs to acquireMachinesLock for "custom-flannel-165759"
	I1124 14:01:31.209184  637854 start.go:93] Provisioning new machine with config: &{Name:custom-flannel-165759 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:custom-flannel-165759 Namespace:default APIServerHAVIP: A
PIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:testdata/kube-flannel.yaml} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false Disab
leCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1124 14:01:31.209270  637854 start.go:125] createHost starting for "" (driver="docker")
	I1124 14:01:31.210771  637854 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1124 14:01:31.211011  637854 start.go:159] libmachine.API.Create for "custom-flannel-165759" (driver="docker")
	I1124 14:01:31.211044  637854 client.go:173] LocalClient.Create starting
	I1124 14:01:31.211117  637854 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21932-348000/.minikube/certs/ca.pem
	I1124 14:01:31.211157  637854 main.go:143] libmachine: Decoding PEM data...
	I1124 14:01:31.211180  637854 main.go:143] libmachine: Parsing certificate...
	I1124 14:01:31.211254  637854 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21932-348000/.minikube/certs/cert.pem
	I1124 14:01:31.211289  637854 main.go:143] libmachine: Decoding PEM data...
	I1124 14:01:31.211308  637854 main.go:143] libmachine: Parsing certificate...
	I1124 14:01:31.211742  637854 cli_runner.go:164] Run: docker network inspect custom-flannel-165759 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1124 14:01:31.236692  637854 cli_runner.go:211] docker network inspect custom-flannel-165759 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1124 14:01:31.236784  637854 network_create.go:284] running [docker network inspect custom-flannel-165759] to gather additional debugging logs...
	I1124 14:01:31.236805  637854 cli_runner.go:164] Run: docker network inspect custom-flannel-165759
	W1124 14:01:31.256348  637854 cli_runner.go:211] docker network inspect custom-flannel-165759 returned with exit code 1
	I1124 14:01:31.256372  637854 network_create.go:287] error running [docker network inspect custom-flannel-165759]: docker network inspect custom-flannel-165759: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network custom-flannel-165759 not found
	I1124 14:01:31.256393  637854 network_create.go:289] output of [docker network inspect custom-flannel-165759]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network custom-flannel-165759 not found
	
	** /stderr **
	I1124 14:01:31.256508  637854 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1124 14:01:31.275938  637854 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-d51e7dfe1049 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:4a:86:1b:17:16:ff} reservation:<nil>}
	I1124 14:01:31.276879  637854 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-e3a6280986d1 IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:92:e6:88:24:ba:69} reservation:<nil>}
	I1124 14:01:31.277457  637854 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-e4f79d672777 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:32:e2:7c:23:0e:27} reservation:<nil>}
	I1124 14:01:31.279952  637854 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-3c6d31fc521d IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:7a:d0:3c:9a:90:9e} reservation:<nil>}
	I1124 14:01:31.280546  637854 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-95ddebcd3d89 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:ba:53:32:2f:bb:ed} reservation:<nil>}
	I1124 14:01:31.281093  637854 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-6aefb6d34233 IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:82:91:09:77:03:5e} reservation:<nil>}
	I1124 14:01:31.281775  637854 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001e580d0}
	I1124 14:01:31.281800  637854 network_create.go:124] attempt to create docker network custom-flannel-165759 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1124 14:01:31.281853  637854 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=custom-flannel-165759 custom-flannel-165759
	I1124 14:01:31.332298  637854 network_create.go:108] docker network custom-flannel-165759 192.168.103.0/24 created
	I1124 14:01:31.332344  637854 kic.go:121] calculated static IP "192.168.103.2" for the "custom-flannel-165759" container
	I1124 14:01:31.332430  637854 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1124 14:01:31.350107  637854 cli_runner.go:164] Run: docker volume create custom-flannel-165759 --label name.minikube.sigs.k8s.io=custom-flannel-165759 --label created_by.minikube.sigs.k8s.io=true
	I1124 14:01:31.368612  637854 oci.go:103] Successfully created a docker volume custom-flannel-165759
	I1124 14:01:31.368690  637854 cli_runner.go:164] Run: docker run --rm --name custom-flannel-165759-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=custom-flannel-165759 --entrypoint /usr/bin/test -v custom-flannel-165759:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -d /var/lib
	I1124 14:01:31.784225  637854 oci.go:107] Successfully prepared a docker volume custom-flannel-165759
	I1124 14:01:31.784295  637854 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1124 14:01:31.784309  637854 kic.go:194] Starting extracting preloaded images to volume ...
	I1124 14:01:31.784410  637854 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21932-348000/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v custom-flannel-165759:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f -I lz4 -xf /preloaded.tar -C /extractDir
	I1124 14:01:32.357093  633029 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.63104804s
	I1124 14:01:32.854168  633029 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.128200898s
	I1124 14:01:36.728415  633029 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 6.00243219s
	I1124 14:01:36.741087  633029 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1124 14:01:36.757851  633029 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1124 14:01:36.770389  633029 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1124 14:01:36.770636  633029 kubeadm.go:319] [mark-control-plane] Marking the node calico-165759 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1124 14:01:36.780957  633029 kubeadm.go:319] [bootstrap-token] Using token: pnqexg.dusful2zs44107pr
	
	
	==> CRI-O <==
	Nov 24 14:00:53 embed-certs-456660 crio[563]: time="2025-11-24T14:00:53.375581856Z" level=info msg="Created container 4a15b0d874fde66337525492ac9435a3bc9f5a8b35fc018a641eb84f0c7e048f: kubernetes-dashboard/kubernetes-dashboard-855c9754f9-dz7pl/kubernetes-dashboard" id=18fcabb6-7200-4e25-a6e6-760d596aa3f9 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 14:00:53 embed-certs-456660 crio[563]: time="2025-11-24T14:00:53.376087107Z" level=info msg="Starting container: 4a15b0d874fde66337525492ac9435a3bc9f5a8b35fc018a641eb84f0c7e048f" id=74be4958-7336-4181-adcd-eed608c66b1a name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 14:00:53 embed-certs-456660 crio[563]: time="2025-11-24T14:00:53.377633525Z" level=info msg="Started container" PID=1722 containerID=4a15b0d874fde66337525492ac9435a3bc9f5a8b35fc018a641eb84f0c7e048f description=kubernetes-dashboard/kubernetes-dashboard-855c9754f9-dz7pl/kubernetes-dashboard id=74be4958-7336-4181-adcd-eed608c66b1a name=/runtime.v1.RuntimeService/StartContainer sandboxID=04d5a57ce32d4d68046046a1872326e5bc80d0035b693507e46a3a7752456d56
	Nov 24 14:01:07 embed-certs-456660 crio[563]: time="2025-11-24T14:01:07.820879854Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=da1b30a6-362a-4dd0-9d11-a7ce9562c74d name=/runtime.v1.ImageService/ImageStatus
	Nov 24 14:01:07 embed-certs-456660 crio[563]: time="2025-11-24T14:01:07.823397355Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=f84ed377-74b9-4cf3-b2d2-b8c848513a71 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 14:01:07 embed-certs-456660 crio[563]: time="2025-11-24T14:01:07.826555641Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fjr27/dashboard-metrics-scraper" id=5cba4364-4b2b-4044-b718-e6229084b632 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 14:01:07 embed-certs-456660 crio[563]: time="2025-11-24T14:01:07.826698552Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 14:01:07 embed-certs-456660 crio[563]: time="2025-11-24T14:01:07.832780214Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 14:01:07 embed-certs-456660 crio[563]: time="2025-11-24T14:01:07.833310782Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 14:01:07 embed-certs-456660 crio[563]: time="2025-11-24T14:01:07.859963102Z" level=info msg="Created container e8aa71d4c87146e95bcfe0c7b254a87aac05ab293ba95da6798abedbd3f78277: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fjr27/dashboard-metrics-scraper" id=5cba4364-4b2b-4044-b718-e6229084b632 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 14:01:07 embed-certs-456660 crio[563]: time="2025-11-24T14:01:07.860492173Z" level=info msg="Starting container: e8aa71d4c87146e95bcfe0c7b254a87aac05ab293ba95da6798abedbd3f78277" id=c7fdff5b-e3cb-447c-be01-2f6f31381d3f name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 14:01:07 embed-certs-456660 crio[563]: time="2025-11-24T14:01:07.86235239Z" level=info msg="Started container" PID=1740 containerID=e8aa71d4c87146e95bcfe0c7b254a87aac05ab293ba95da6798abedbd3f78277 description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fjr27/dashboard-metrics-scraper id=c7fdff5b-e3cb-447c-be01-2f6f31381d3f name=/runtime.v1.RuntimeService/StartContainer sandboxID=3687e14564819c062c1a2ad838a1b751a37f19fffc1c10264076c195bdd0e5d2
	Nov 24 14:01:07 embed-certs-456660 crio[563]: time="2025-11-24T14:01:07.938879526Z" level=info msg="Removing container: 99bb803041cdc9efaa18c0f37f850984454d9fc6fc7b091a4557d21f931949f6" id=9465689b-aac5-452d-a753-629807d69af3 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 24 14:01:07 embed-certs-456660 crio[563]: time="2025-11-24T14:01:07.950427812Z" level=info msg="Removed container 99bb803041cdc9efaa18c0f37f850984454d9fc6fc7b091a4557d21f931949f6: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fjr27/dashboard-metrics-scraper" id=9465689b-aac5-452d-a753-629807d69af3 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 24 14:01:12 embed-certs-456660 crio[563]: time="2025-11-24T14:01:12.950838066Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=1d03aca2-447b-4d0a-8323-092661a023e1 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 14:01:12 embed-certs-456660 crio[563]: time="2025-11-24T14:01:12.951853242Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=61a97c9b-b9b2-4867-a004-00f438d052b3 name=/runtime.v1.ImageService/ImageStatus
	Nov 24 14:01:12 embed-certs-456660 crio[563]: time="2025-11-24T14:01:12.952968808Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=8f222509-6f8a-40e9-82d0-aeb897b75ec9 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 14:01:12 embed-certs-456660 crio[563]: time="2025-11-24T14:01:12.953112114Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 14:01:12 embed-certs-456660 crio[563]: time="2025-11-24T14:01:12.957561741Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 14:01:12 embed-certs-456660 crio[563]: time="2025-11-24T14:01:12.957700477Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/7450cfc0f2deead6aa7f761ca6cf7bf6f79b9a403c67cfd42a96a7921a299ea2/merged/etc/passwd: no such file or directory"
	Nov 24 14:01:12 embed-certs-456660 crio[563]: time="2025-11-24T14:01:12.957721856Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/7450cfc0f2deead6aa7f761ca6cf7bf6f79b9a403c67cfd42a96a7921a299ea2/merged/etc/group: no such file or directory"
	Nov 24 14:01:12 embed-certs-456660 crio[563]: time="2025-11-24T14:01:12.957952042Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 24 14:01:12 embed-certs-456660 crio[563]: time="2025-11-24T14:01:12.987178449Z" level=info msg="Created container 2fee0c2f6ff9ffd6bc9a2054e39a4cf266c5e67a179ca9f84bedb2135194353d: kube-system/storage-provisioner/storage-provisioner" id=8f222509-6f8a-40e9-82d0-aeb897b75ec9 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 24 14:01:12 embed-certs-456660 crio[563]: time="2025-11-24T14:01:12.987728064Z" level=info msg="Starting container: 2fee0c2f6ff9ffd6bc9a2054e39a4cf266c5e67a179ca9f84bedb2135194353d" id=87c30714-1d1d-4f57-8e26-685b8fa453ac name=/runtime.v1.RuntimeService/StartContainer
	Nov 24 14:01:12 embed-certs-456660 crio[563]: time="2025-11-24T14:01:12.989527551Z" level=info msg="Started container" PID=1756 containerID=2fee0c2f6ff9ffd6bc9a2054e39a4cf266c5e67a179ca9f84bedb2135194353d description=kube-system/storage-provisioner/storage-provisioner id=87c30714-1d1d-4f57-8e26-685b8fa453ac name=/runtime.v1.RuntimeService/StartContainer sandboxID=176c671987df90400b41eff8cde4ada333bb3f38cabce3aa9d62d0253d877128
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	2fee0c2f6ff9f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           25 seconds ago      Running             storage-provisioner         1                   176c671987df9       storage-provisioner                          kube-system
	e8aa71d4c8714       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           30 seconds ago      Exited              dashboard-metrics-scraper   2                   3687e14564819       dashboard-metrics-scraper-6ffb444bf9-fjr27   kubernetes-dashboard
	4a15b0d874fde       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   45 seconds ago      Running             kubernetes-dashboard        0                   04d5a57ce32d4       kubernetes-dashboard-855c9754f9-dz7pl        kubernetes-dashboard
	5501ca0c7fb4e       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           56 seconds ago      Running             coredns                     0                   6afcd39d366e0       coredns-66bc5c9577-nnp2c                     kube-system
	4369d7b07fb50       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           56 seconds ago      Running             busybox                     1                   5769afe839db1       busybox                                      default
	87daf6e06706d       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           56 seconds ago      Running             kube-proxy                  0                   add4b82153862       kube-proxy-k5bxk                             kube-system
	7d104f956282e       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           56 seconds ago      Running             kindnet-cni                 0                   cec9436a97b7c       kindnet-vlqg6                                kube-system
	874e0893a4626       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           56 seconds ago      Exited              storage-provisioner         0                   176c671987df9       storage-provisioner                          kube-system
	4fd82fcf0a95c       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           59 seconds ago      Running             kube-apiserver              0                   5cb793f20a83a       kube-apiserver-embed-certs-456660            kube-system
	c060c8b92a797       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           59 seconds ago      Running             kube-scheduler              0                   87c4da181d0df       kube-scheduler-embed-certs-456660            kube-system
	7a8be54b5dc72       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           59 seconds ago      Running             kube-controller-manager     0                   9e366ad7408f5       kube-controller-manager-embed-certs-456660   kube-system
	9272ef68efbd4       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           59 seconds ago      Running             etcd                        0                   a8e44bb36305b       etcd-embed-certs-456660                      kube-system
	
	
	==> coredns [5501ca0c7fb4eb766d1f1267cdd592ef1abb6f036d0c0e6e686f3dfb130ff854] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:46602 - 25947 "HINFO IN 4176784439970732156.6497882213263914856. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.123400726s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
	
	
	==> describe nodes <==
	Name:               embed-certs-456660
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-456660
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=b5d1c9f4e75f4e638a533695fd62619949cefcab
	                    minikube.k8s.io/name=embed-certs-456660
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_24T13_59_16_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 24 Nov 2025 13:59:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-456660
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 24 Nov 2025 14:01:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 24 Nov 2025 14:01:12 +0000   Mon, 24 Nov 2025 13:59:11 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 24 Nov 2025 14:01:12 +0000   Mon, 24 Nov 2025 13:59:11 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 24 Nov 2025 14:01:12 +0000   Mon, 24 Nov 2025 13:59:11 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 24 Nov 2025 14:01:12 +0000   Mon, 24 Nov 2025 14:00:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    embed-certs-456660
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863348Ki
	  pods:               110
	System Info:
	  Machine ID:                 9629f1d5bc1ed524a56ce23c69214c09
	  System UUID:                950f3d12-76ba-49d9-8f39-c1dd2a09eea1
	  Boot ID:                    9a34d64a-eb17-4892-9c0b-855837aec864
	  Kernel Version:             6.8.0-1044-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.2
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 coredns-66bc5c9577-nnp2c                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     2m17s
	  kube-system                 etcd-embed-certs-456660                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m23s
	  kube-system                 kindnet-vlqg6                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      2m17s
	  kube-system                 kube-apiserver-embed-certs-456660             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m23s
	  kube-system                 kube-controller-manager-embed-certs-456660    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m23s
	  kube-system                 kube-proxy-k5bxk                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m17s
	  kube-system                 kube-scheduler-embed-certs-456660             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m23s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m17s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-fjr27    0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-dz7pl         0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 2m16s              kube-proxy       
	  Normal  Starting                 56s                kube-proxy       
	  Normal  NodeHasSufficientMemory  2m23s              kubelet          Node embed-certs-456660 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m23s              kubelet          Node embed-certs-456660 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m23s              kubelet          Node embed-certs-456660 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m23s              kubelet          Starting kubelet.
	  Normal  RegisteredNode           2m18s              node-controller  Node embed-certs-456660 event: Registered Node embed-certs-456660 in Controller
	  Normal  NodeReady                96s                kubelet          Node embed-certs-456660 status is now: NodeReady
	  Normal  Starting                 60s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  60s (x8 over 60s)  kubelet          Node embed-certs-456660 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    60s (x8 over 60s)  kubelet          Node embed-certs-456660 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     60s (x8 over 60s)  kubelet          Node embed-certs-456660 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           53s                node-controller  Node embed-certs-456660 event: Registered Node embed-certs-456660 in Controller
	
	
	==> dmesg <==
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6a c8 62 0b 56 43 08 06
	[  +0.000389] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000005] ll header: 00000000: ff ff ff ff ff ff 7e 1c 4f d3 0f 6c 08 06
	[Nov24 13:16] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +1.054353] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +1.023912] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000020] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +1.023872] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +1.023897] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +1.023891] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +2.047768] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +4.031637] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[  +8.191144] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[ +16.382308] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	[Nov24 13:17] IPv4: martian source 10.244.0.21 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: c2 70 04 92 3a 0c a6 2e de 62 6d 8e 08 00
	
	
	==> etcd [9272ef68efbd4d16c91f204260a6c267f366f85059e13af91359474c4768da2f] <==
	{"level":"warn","ts":"2025-11-24T14:00:40.977921Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43550","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:00:40.990139Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43570","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:00:41.004021Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:00:41.017673Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43610","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:00:41.027482Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43624","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:00:41.036150Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43644","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:00:41.048294Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43674","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:00:41.055818Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43704","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:00:41.064001Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43716","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:00:41.073053Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43738","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:00:41.082393Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43748","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:00:41.091388Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43752","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:00:41.104431Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43776","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:00:41.114620Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43790","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:00:41.123431Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43808","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:00:41.142218Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43834","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:00:41.151340Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43844","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:00:41.159422Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43858","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-24T14:00:41.215728Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43886","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-24T14:00:47.070043Z","caller":"traceutil/trace.go:172","msg":"trace[1144434822] transaction","detail":"{read_only:false; response_revision:519; number_of_response:1; }","duration":"140.819373ms","start":"2025-11-24T14:00:46.929204Z","end":"2025-11-24T14:00:47.070023Z","steps":["trace[1144434822] 'process raft request'  (duration: 128.48459ms)","trace[1144434822] 'compare'  (duration: 12.19651ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-24T14:00:47.182753Z","caller":"traceutil/trace.go:172","msg":"trace[418576795] linearizableReadLoop","detail":"{readStateIndex:552; appliedIndex:552; }","duration":"101.351187ms","start":"2025-11-24T14:00:47.081368Z","end":"2025-11-24T14:00:47.182719Z","steps":["trace[418576795] 'read index received'  (duration: 101.344055ms)","trace[418576795] 'applied index is now lower than readState.Index'  (duration: 6.137µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-24T14:00:47.183263Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"101.856558ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/embed-certs-456660\" limit:1 ","response":"range_response_count:1 size:5708"}
	{"level":"info","ts":"2025-11-24T14:00:47.183308Z","caller":"traceutil/trace.go:172","msg":"trace[239692795] transaction","detail":"{read_only:false; response_revision:520; number_of_response:1; }","duration":"104.363259ms","start":"2025-11-24T14:00:47.078920Z","end":"2025-11-24T14:00:47.183283Z","steps":["trace[239692795] 'process raft request'  (duration: 103.830048ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T14:00:47.183325Z","caller":"traceutil/trace.go:172","msg":"trace[715254745] range","detail":"{range_begin:/registry/minions/embed-certs-456660; range_end:; response_count:1; response_revision:519; }","duration":"101.950366ms","start":"2025-11-24T14:00:47.081364Z","end":"2025-11-24T14:00:47.183315Z","steps":["trace[715254745] 'agreement among raft nodes before linearized reading'  (duration: 101.432129ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-24T14:00:47.666971Z","caller":"traceutil/trace.go:172","msg":"trace[1776998844] transaction","detail":"{read_only:false; response_revision:537; number_of_response:1; }","duration":"115.140069ms","start":"2025-11-24T14:00:47.551807Z","end":"2025-11-24T14:00:47.666947Z","steps":["trace[1776998844] 'process raft request'  (duration: 51.673517ms)","trace[1776998844] 'compare'  (duration: 63.093595ms)"],"step_count":2}
	
	
	==> kernel <==
	 14:01:39 up  2:44,  0 user,  load average: 3.92, 3.35, 2.27
	Linux embed-certs-456660 6.8.0-1044-gcp #47~22.04.1-Ubuntu SMP Thu Oct 23 21:07:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [7d104f956282eb0c0892603f25ca5ca1dcbb6e0b3315dd73f7a02f9d43b26a6e] <==
	I1124 14:00:42.495917       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1124 14:00:42.496269       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1124 14:00:42.496454       1 main.go:148] setting mtu 1500 for CNI 
	I1124 14:00:42.496477       1 main.go:178] kindnetd IP family: "ipv4"
	I1124 14:00:42.496493       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-24T14:00:42Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1124 14:00:42.696807       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1124 14:00:42.696838       1 controller.go:381] "Waiting for informer caches to sync"
	I1124 14:00:42.696859       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1124 14:00:42.697033       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1124 14:00:42.997517       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1124 14:00:42.997547       1 metrics.go:72] Registering metrics
	I1124 14:00:42.997617       1 controller.go:711] "Syncing nftables rules"
	I1124 14:00:52.696266       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1124 14:00:52.696328       1 main.go:301] handling current node
	I1124 14:01:02.701005       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1124 14:01:02.701106       1 main.go:301] handling current node
	I1124 14:01:12.696288       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1124 14:01:12.696345       1 main.go:301] handling current node
	I1124 14:01:22.698092       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1124 14:01:22.698127       1 main.go:301] handling current node
	I1124 14:01:32.705382       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1124 14:01:32.705419       1 main.go:301] handling current node
	
	
	==> kube-apiserver [4fd82fcf0a95c7ded90099f1ef94b195a1bfbec5996b4c8707133b0ae2e94054] <==
	I1124 14:00:41.808519       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1124 14:00:41.810281       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1124 14:00:41.812702       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1124 14:00:41.812811       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1124 14:00:41.812831       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1124 14:00:41.813314       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1124 14:00:41.817770       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1124 14:00:41.817796       1 policy_source.go:240] refreshing policies
	I1124 14:00:41.828663       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1124 14:00:41.848815       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1124 14:00:41.861144       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1124 14:00:41.861200       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1124 14:00:41.866528       1 cache.go:39] Caches are synced for autoregister controller
	I1124 14:00:41.968278       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1124 14:00:42.266761       1 controller.go:667] quota admission added evaluator for: namespaces
	I1124 14:00:42.309260       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1124 14:00:42.336607       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1124 14:00:42.344586       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1124 14:00:42.390050       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.97.154.45"}
	I1124 14:00:42.402176       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.98.8.34"}
	I1124 14:00:42.710496       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1124 14:00:45.572986       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1124 14:00:45.573042       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1124 14:00:45.670988       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1124 14:00:45.773670       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [7a8be54b5dc721d84f31ea8fd1ee274f5d8e338f35ccf6545b4ae1a0ae3390eb] <==
	I1124 14:00:45.201357       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1124 14:00:45.202534       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1124 14:00:45.216936       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1124 14:00:45.216964       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1124 14:00:45.217145       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1124 14:00:45.217236       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-456660"
	I1124 14:00:45.216978       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1124 14:00:45.217317       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1124 14:00:45.217371       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1124 14:00:45.216978       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1124 14:00:45.217358       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1124 14:00:45.217334       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1124 14:00:45.217434       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1124 14:00:45.217554       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1124 14:00:45.217745       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1124 14:00:45.217800       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1124 14:00:45.218448       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1124 14:00:45.219621       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1124 14:00:45.222952       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1124 14:00:45.224172       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 14:00:45.225272       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1124 14:00:45.225287       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1124 14:00:45.226428       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1124 14:00:45.233741       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1124 14:00:45.244084       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [87daf6e06706d8d8b44bbb2aa7f0e1165e3bb91aa705936757264cda31996eb4] <==
	I1124 14:00:42.317959       1 server_linux.go:53] "Using iptables proxy"
	I1124 14:00:42.385299       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1124 14:00:42.485756       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1124 14:00:42.485790       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1124 14:00:42.485881       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1124 14:00:42.510620       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1124 14:00:42.510676       1 server_linux.go:132] "Using iptables Proxier"
	I1124 14:00:42.515860       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1124 14:00:42.516424       1 server.go:527] "Version info" version="v1.34.1"
	I1124 14:00:42.516665       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 14:00:42.518766       1 config.go:403] "Starting serviceCIDR config controller"
	I1124 14:00:42.518929       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1124 14:00:42.519861       1 config.go:200] "Starting service config controller"
	I1124 14:00:42.519932       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1124 14:00:42.520121       1 config.go:106] "Starting endpoint slice config controller"
	I1124 14:00:42.520147       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1124 14:00:42.520190       1 config.go:309] "Starting node config controller"
	I1124 14:00:42.520206       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1124 14:00:42.620337       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1124 14:00:42.620379       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1124 14:00:42.620350       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1124 14:00:42.622036       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [c060c8b92a797680bc8311ef0a54ce5bacbba9cdfb27356a2c9ebd54d3f1eba9] <==
	I1124 14:00:40.402318       1 serving.go:386] Generated self-signed cert in-memory
	I1124 14:00:41.834873       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1124 14:00:41.834921       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1124 14:00:41.842129       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1124 14:00:41.842238       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1124 14:00:41.846564       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 14:00:41.846591       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 14:00:41.846966       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1124 14:00:41.846979       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1124 14:00:41.850425       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1124 14:00:41.850994       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1124 14:00:41.947101       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1124 14:00:41.947296       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1124 14:00:41.951589       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	
	
	==> kubelet <==
	Nov 24 14:00:45 embed-certs-456660 kubelet[725]: I1124 14:00:45.789389     725 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdwn9\" (UniqueName: \"kubernetes.io/projected/077f51d0-5205-40db-a330-74520645fac9-kube-api-access-tdwn9\") pod \"dashboard-metrics-scraper-6ffb444bf9-fjr27\" (UID: \"077f51d0-5205-40db-a330-74520645fac9\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fjr27"
	Nov 24 14:00:48 embed-certs-456660 kubelet[725]: I1124 14:00:48.558290     725 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 24 14:00:49 embed-certs-456660 kubelet[725]: I1124 14:00:49.885489     725 scope.go:117] "RemoveContainer" containerID="8a14c90ac13f97ff67e3b85602e9eb08bfef67fd48948ffa74bd4d0ca8e7b604"
	Nov 24 14:00:50 embed-certs-456660 kubelet[725]: I1124 14:00:50.890680     725 scope.go:117] "RemoveContainer" containerID="8a14c90ac13f97ff67e3b85602e9eb08bfef67fd48948ffa74bd4d0ca8e7b604"
	Nov 24 14:00:50 embed-certs-456660 kubelet[725]: I1124 14:00:50.890928     725 scope.go:117] "RemoveContainer" containerID="99bb803041cdc9efaa18c0f37f850984454d9fc6fc7b091a4557d21f931949f6"
	Nov 24 14:00:50 embed-certs-456660 kubelet[725]: E1124 14:00:50.892078     725 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fjr27_kubernetes-dashboard(077f51d0-5205-40db-a330-74520645fac9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fjr27" podUID="077f51d0-5205-40db-a330-74520645fac9"
	Nov 24 14:00:51 embed-certs-456660 kubelet[725]: I1124 14:00:51.895361     725 scope.go:117] "RemoveContainer" containerID="99bb803041cdc9efaa18c0f37f850984454d9fc6fc7b091a4557d21f931949f6"
	Nov 24 14:00:51 embed-certs-456660 kubelet[725]: E1124 14:00:51.895614     725 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fjr27_kubernetes-dashboard(077f51d0-5205-40db-a330-74520645fac9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fjr27" podUID="077f51d0-5205-40db-a330-74520645fac9"
	Nov 24 14:00:52 embed-certs-456660 kubelet[725]: I1124 14:00:52.898379     725 scope.go:117] "RemoveContainer" containerID="99bb803041cdc9efaa18c0f37f850984454d9fc6fc7b091a4557d21f931949f6"
	Nov 24 14:00:52 embed-certs-456660 kubelet[725]: E1124 14:00:52.899099     725 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fjr27_kubernetes-dashboard(077f51d0-5205-40db-a330-74520645fac9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fjr27" podUID="077f51d0-5205-40db-a330-74520645fac9"
	Nov 24 14:00:53 embed-certs-456660 kubelet[725]: I1124 14:00:53.911774     725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-dz7pl" podStartSLOduration=1.5581080360000001 podStartE2EDuration="8.911751879s" podCreationTimestamp="2025-11-24 14:00:45 +0000 UTC" firstStartedPulling="2025-11-24 14:00:45.985395373 +0000 UTC m=+7.279857221" lastFinishedPulling="2025-11-24 14:00:53.339039218 +0000 UTC m=+14.633501064" observedRunningTime="2025-11-24 14:00:53.911623802 +0000 UTC m=+15.206085667" watchObservedRunningTime="2025-11-24 14:00:53.911751879 +0000 UTC m=+15.206213745"
	Nov 24 14:01:07 embed-certs-456660 kubelet[725]: I1124 14:01:07.820272     725 scope.go:117] "RemoveContainer" containerID="99bb803041cdc9efaa18c0f37f850984454d9fc6fc7b091a4557d21f931949f6"
	Nov 24 14:01:07 embed-certs-456660 kubelet[725]: I1124 14:01:07.936923     725 scope.go:117] "RemoveContainer" containerID="99bb803041cdc9efaa18c0f37f850984454d9fc6fc7b091a4557d21f931949f6"
	Nov 24 14:01:07 embed-certs-456660 kubelet[725]: I1124 14:01:07.937138     725 scope.go:117] "RemoveContainer" containerID="e8aa71d4c87146e95bcfe0c7b254a87aac05ab293ba95da6798abedbd3f78277"
	Nov 24 14:01:07 embed-certs-456660 kubelet[725]: E1124 14:01:07.937341     725 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fjr27_kubernetes-dashboard(077f51d0-5205-40db-a330-74520645fac9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fjr27" podUID="077f51d0-5205-40db-a330-74520645fac9"
	Nov 24 14:01:11 embed-certs-456660 kubelet[725]: I1124 14:01:11.566655     725 scope.go:117] "RemoveContainer" containerID="e8aa71d4c87146e95bcfe0c7b254a87aac05ab293ba95da6798abedbd3f78277"
	Nov 24 14:01:11 embed-certs-456660 kubelet[725]: E1124 14:01:11.566875     725 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fjr27_kubernetes-dashboard(077f51d0-5205-40db-a330-74520645fac9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fjr27" podUID="077f51d0-5205-40db-a330-74520645fac9"
	Nov 24 14:01:12 embed-certs-456660 kubelet[725]: I1124 14:01:12.950469     725 scope.go:117] "RemoveContainer" containerID="874e0893a46264332920443ab04e012d22d78baea09033794f22066fb59e4e17"
	Nov 24 14:01:22 embed-certs-456660 kubelet[725]: I1124 14:01:22.820169     725 scope.go:117] "RemoveContainer" containerID="e8aa71d4c87146e95bcfe0c7b254a87aac05ab293ba95da6798abedbd3f78277"
	Nov 24 14:01:22 embed-certs-456660 kubelet[725]: E1124 14:01:22.820377     725 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-fjr27_kubernetes-dashboard(077f51d0-5205-40db-a330-74520645fac9)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-fjr27" podUID="077f51d0-5205-40db-a330-74520645fac9"
	Nov 24 14:01:33 embed-certs-456660 kubelet[725]: I1124 14:01:33.079027     725 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Nov 24 14:01:33 embed-certs-456660 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 24 14:01:33 embed-certs-456660 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 24 14:01:33 embed-certs-456660 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 24 14:01:33 embed-certs-456660 systemd[1]: kubelet.service: Consumed 1.625s CPU time.
	
	
	==> kubernetes-dashboard [4a15b0d874fde66337525492ac9435a3bc9f5a8b35fc018a641eb84f0c7e048f] <==
	2025/11/24 14:00:53 Using namespace: kubernetes-dashboard
	2025/11/24 14:00:53 Using in-cluster config to connect to apiserver
	2025/11/24 14:00:53 Using secret token for csrf signing
	2025/11/24 14:00:53 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/24 14:00:53 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/24 14:00:53 Successful initial request to the apiserver, version: v1.34.1
	2025/11/24 14:00:53 Generating JWE encryption key
	2025/11/24 14:00:53 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/24 14:00:53 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/24 14:00:53 Initializing JWE encryption key from synchronized object
	2025/11/24 14:00:53 Creating in-cluster Sidecar client
	2025/11/24 14:00:53 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/24 14:00:53 Serving insecurely on HTTP port: 9090
	2025/11/24 14:01:23 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/24 14:00:53 Starting overwatch
	
	
	==> storage-provisioner [2fee0c2f6ff9ffd6bc9a2054e39a4cf266c5e67a179ca9f84bedb2135194353d] <==
	I1124 14:01:13.001060       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1124 14:01:13.008099       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1124 14:01:13.008130       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1124 14:01:13.009995       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:01:16.464512       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:01:20.724728       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:01:24.325426       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:01:27.379524       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:01:30.401217       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:01:30.414151       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 14:01:30.414336       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1124 14:01:30.414463       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-456660_9ac06b7c-1fa1-4774-a265-238670bd5e4a!
	I1124 14:01:30.414453       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"651acb2c-b76c-4715-850b-34431f20fd28", APIVersion:"v1", ResourceVersion:"638", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-456660_9ac06b7c-1fa1-4774-a265-238670bd5e4a became leader
	W1124 14:01:30.417190       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:01:30.420179       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1124 14:01:30.514763       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-456660_9ac06b7c-1fa1-4774-a265-238670bd5e4a!
	W1124 14:01:32.423323       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:01:32.428431       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:01:34.431463       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:01:34.437711       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:01:36.440206       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:01:36.492615       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:01:38.495676       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1124 14:01:38.500048       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [874e0893a46264332920443ab04e012d22d78baea09033794f22066fb59e4e17] <==
	I1124 14:00:42.247166       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1124 14:01:12.250257       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-456660 -n embed-certs-456660
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-456660 -n embed-certs-456660: exit status 2 (352.338303ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-456660 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
E1124 14:01:39.788905  351593 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/functional-334592/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (7.26s)

                                                
                                    

Test pass (264/328)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 4.69
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.08
9 TestDownloadOnly/v1.28.0/DeleteAll 0.21
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.1/json-events 3.98
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.07
18 TestDownloadOnly/v1.34.1/DeleteAll 0.21
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.13
20 TestDownloadOnlyKic 0.39
21 TestBinaryMirror 0.79
22 TestOffline 91.47
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 127.4
31 TestAddons/serial/GCPAuth/Namespaces 0.11
32 TestAddons/serial/GCPAuth/FakeCredentials 8.39
48 TestAddons/StoppedEnableDisable 18.52
49 TestCertOptions 25.91
50 TestCertExpiration 218.47
52 TestForceSystemdFlag 30.72
53 TestForceSystemdEnv 33.99
58 TestErrorSpam/setup 23.07
59 TestErrorSpam/start 0.64
60 TestErrorSpam/status 0.94
61 TestErrorSpam/pause 5.81
62 TestErrorSpam/unpause 5.65
63 TestErrorSpam/stop 12.55
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 36.13
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 6
70 TestFunctional/serial/KubeContext 0.04
71 TestFunctional/serial/KubectlGetPods 0.06
74 TestFunctional/serial/CacheCmd/cache/add_remote 2.74
75 TestFunctional/serial/CacheCmd/cache/add_local 1.13
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
77 TestFunctional/serial/CacheCmd/cache/list 0.06
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.28
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.52
80 TestFunctional/serial/CacheCmd/cache/delete 0.12
81 TestFunctional/serial/MinikubeKubectlCmd 0.12
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
83 TestFunctional/serial/ExtraConfig 46.46
84 TestFunctional/serial/ComponentHealth 0.06
85 TestFunctional/serial/LogsCmd 1.13
86 TestFunctional/serial/LogsFileCmd 1.14
87 TestFunctional/serial/InvalidService 4.6
89 TestFunctional/parallel/ConfigCmd 0.45
90 TestFunctional/parallel/DashboardCmd 8.14
91 TestFunctional/parallel/DryRun 0.38
92 TestFunctional/parallel/InternationalLanguage 0.18
93 TestFunctional/parallel/StatusCmd 0.93
98 TestFunctional/parallel/AddonsCmd 0.15
99 TestFunctional/parallel/PersistentVolumeClaim 26.61
101 TestFunctional/parallel/SSHCmd 0.82
102 TestFunctional/parallel/CpCmd 1.84
103 TestFunctional/parallel/MySQL 16.06
104 TestFunctional/parallel/FileSync 0.34
105 TestFunctional/parallel/CertSync 1.84
109 TestFunctional/parallel/NodeLabels 0.06
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.55
113 TestFunctional/parallel/License 0.47
114 TestFunctional/parallel/UpdateContextCmd/no_changes 0.15
115 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.14
116 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.15
119 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.55
120 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
122 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 12.3
123 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
124 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
128 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
129 TestFunctional/parallel/Version/short 0.06
130 TestFunctional/parallel/Version/components 0.52
131 TestFunctional/parallel/ImageCommands/ImageListShort 0.39
132 TestFunctional/parallel/ImageCommands/ImageListTable 0.22
133 TestFunctional/parallel/ImageCommands/ImageListJson 0.22
134 TestFunctional/parallel/ImageCommands/ImageListYaml 0.23
135 TestFunctional/parallel/ImageCommands/ImageBuild 2.09
136 TestFunctional/parallel/ImageCommands/Setup 1.16
141 TestFunctional/parallel/ImageCommands/ImageRemove 0.49
144 TestFunctional/parallel/ProfileCmd/profile_not_create 0.4
145 TestFunctional/parallel/ProfileCmd/profile_list 0.38
146 TestFunctional/parallel/ProfileCmd/profile_json_output 0.39
147 TestFunctional/parallel/MountCmd/any-port 5.85
148 TestFunctional/parallel/MountCmd/specific-port 1.73
149 TestFunctional/parallel/MountCmd/VerifyCleanup 2.07
150 TestFunctional/parallel/ServiceCmd/List 1.69
151 TestFunctional/parallel/ServiceCmd/JSONOutput 1.7
155 TestFunctional/delete_echo-server_images 0.04
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
162 TestMultiControlPlane/serial/StartCluster 158.76
163 TestMultiControlPlane/serial/DeployApp 4.15
164 TestMultiControlPlane/serial/PingHostFromPods 1.02
165 TestMultiControlPlane/serial/AddWorkerNode 53.96
166 TestMultiControlPlane/serial/NodeLabels 0.06
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.86
168 TestMultiControlPlane/serial/CopyFile 16.94
169 TestMultiControlPlane/serial/StopSecondaryNode 19.01
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.7
171 TestMultiControlPlane/serial/RestartSecondaryNode 13.97
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.87
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 108.58
174 TestMultiControlPlane/serial/DeleteSecondaryNode 10.49
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.68
176 TestMultiControlPlane/serial/StopCluster 42.09
177 TestMultiControlPlane/serial/RestartCluster 50.92
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.69
179 TestMultiControlPlane/serial/AddSecondaryNode 45.11
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.87
185 TestJSONOutput/start/Command 39.63
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 6.14
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.21
210 TestKicCustomNetwork/create_custom_network 25.56
211 TestKicCustomNetwork/use_default_bridge_network 25.96
212 TestKicExistingNetwork 22.16
213 TestKicCustomSubnet 26.35
214 TestKicStaticIP 27.09
215 TestMainNoArgs 0.06
216 TestMinikubeProfile 48.67
219 TestMountStart/serial/StartWithMountFirst 4.71
220 TestMountStart/serial/VerifyMountFirst 0.27
221 TestMountStart/serial/StartWithMountSecond 7.6
222 TestMountStart/serial/VerifyMountSecond 0.27
223 TestMountStart/serial/DeleteFirst 1.65
224 TestMountStart/serial/VerifyMountPostDelete 0.26
225 TestMountStart/serial/Stop 1.25
226 TestMountStart/serial/RestartStopped 7.23
227 TestMountStart/serial/VerifyMountPostStop 0.27
230 TestMultiNode/serial/FreshStart2Nodes 89.97
231 TestMultiNode/serial/DeployApp2Nodes 3.09
232 TestMultiNode/serial/PingHostFrom2Pods 0.69
233 TestMultiNode/serial/AddNode 23.43
234 TestMultiNode/serial/MultiNodeLabels 0.06
235 TestMultiNode/serial/ProfileList 0.64
236 TestMultiNode/serial/CopyFile 9.61
237 TestMultiNode/serial/StopNode 2.22
238 TestMultiNode/serial/StartAfterStop 7.13
239 TestMultiNode/serial/RestartKeepsNodes 82.88
240 TestMultiNode/serial/DeleteNode 5.21
241 TestMultiNode/serial/StopMultiNode 28.49
242 TestMultiNode/serial/RestartMultiNode 24.39
243 TestMultiNode/serial/ValidateNameConflict 22.09
248 TestPreload 101.82
250 TestScheduledStopUnix 96.2
253 TestInsufficientStorage 12.11
254 TestRunningBinaryUpgrade 44.6
256 TestKubernetesUpgrade 303.41
257 TestMissingContainerUpgrade 87.99
259 TestPause/serial/Start 46.32
261 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
262 TestNoKubernetes/serial/StartWithK8s 35.01
263 TestStoppedBinaryUpgrade/Setup 0.68
264 TestStoppedBinaryUpgrade/Upgrade 92.47
265 TestNoKubernetes/serial/StartWithStopK8s 19.56
266 TestPause/serial/SecondStartNoReconfiguration 7.34
268 TestNoKubernetes/serial/Start 4.64
269 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
270 TestNoKubernetes/serial/VerifyK8sNotRunning 0.31
271 TestNoKubernetes/serial/ProfileList 3.14
272 TestNoKubernetes/serial/Stop 1.28
273 TestNoKubernetes/serial/StartNoArgs 9.08
274 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.31
275 TestStoppedBinaryUpgrade/MinikubeLogs 0.89
290 TestNetworkPlugins/group/false 3.69
292 TestStartStop/group/old-k8s-version/serial/FirstStart 50.88
297 TestStartStop/group/no-preload/serial/FirstStart 50.21
298 TestStartStop/group/old-k8s-version/serial/DeployApp 7.22
299 TestStartStop/group/no-preload/serial/DeployApp 7.21
301 TestStartStop/group/old-k8s-version/serial/Stop 15.98
303 TestStartStop/group/no-preload/serial/Stop 18.09
304 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.19
305 TestStartStop/group/old-k8s-version/serial/SecondStart 51.72
306 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.18
307 TestStartStop/group/no-preload/serial/SecondStart 25.54
308 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 7
310 TestStartStop/group/embed-certs/serial/FirstStart 67.09
311 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.07
312 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.24
314 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6
316 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 42.63
317 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.08
318 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.24
321 TestStartStop/group/newest-cni/serial/FirstStart 28.55
322 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.28
323 TestStartStop/group/newest-cni/serial/DeployApp 0
325 TestStartStop/group/newest-cni/serial/Stop 2.56
326 TestStartStop/group/embed-certs/serial/DeployApp 8.23
327 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
328 TestStartStop/group/newest-cni/serial/SecondStart 11.83
330 TestNetworkPlugins/group/auto/Start 38.79
331 TestStartStop/group/default-k8s-diff-port/serial/Stop 16.47
333 TestStartStop/group/embed-certs/serial/Stop 16.3
334 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
335 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
336 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.26
338 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.21
339 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 45.39
340 TestNetworkPlugins/group/kindnet/Start 45.97
341 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.23
342 TestStartStop/group/embed-certs/serial/SecondStart 48.94
343 TestNetworkPlugins/group/auto/KubeletFlags 0.35
344 TestNetworkPlugins/group/auto/NetCatPod 11.25
345 TestNetworkPlugins/group/auto/DNS 0.11
346 TestNetworkPlugins/group/auto/Localhost 0.08
347 TestNetworkPlugins/group/auto/HairPin 0.08
348 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
349 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
350 TestNetworkPlugins/group/calico/Start 52.38
351 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.1
352 TestNetworkPlugins/group/kindnet/KubeletFlags 0.3
353 TestNetworkPlugins/group/kindnet/NetCatPod 10.2
354 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
355 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.29
357 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.09
358 TestNetworkPlugins/group/kindnet/DNS 0.13
359 TestNetworkPlugins/group/kindnet/Localhost 0.08
360 TestNetworkPlugins/group/kindnet/HairPin 0.09
361 TestNetworkPlugins/group/custom-flannel/Start 50.57
362 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.29
364 TestNetworkPlugins/group/enable-default-cni/Start 66.12
365 TestNetworkPlugins/group/flannel/Start 56.63
366 TestNetworkPlugins/group/calico/ControllerPod 6.01
367 TestNetworkPlugins/group/calico/KubeletFlags 0.32
368 TestNetworkPlugins/group/calico/NetCatPod 8.18
369 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.3
370 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.17
371 TestNetworkPlugins/group/calico/DNS 0.16
372 TestNetworkPlugins/group/calico/Localhost 0.14
373 TestNetworkPlugins/group/calico/HairPin 0.11
374 TestNetworkPlugins/group/custom-flannel/DNS 0.12
375 TestNetworkPlugins/group/custom-flannel/Localhost 0.09
376 TestNetworkPlugins/group/custom-flannel/HairPin 0.09
377 TestNetworkPlugins/group/bridge/Start 68.97
378 TestNetworkPlugins/group/flannel/ControllerPod 6.01
379 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.37
380 TestNetworkPlugins/group/enable-default-cni/NetCatPod 9.19
381 TestNetworkPlugins/group/flannel/KubeletFlags 0.31
382 TestNetworkPlugins/group/flannel/NetCatPod 8.18
383 TestNetworkPlugins/group/enable-default-cni/DNS 0.1
384 TestNetworkPlugins/group/enable-default-cni/Localhost 0.09
385 TestNetworkPlugins/group/enable-default-cni/HairPin 0.09
386 TestNetworkPlugins/group/flannel/DNS 0.13
387 TestNetworkPlugins/group/flannel/Localhost 0.12
388 TestNetworkPlugins/group/flannel/HairPin 0.11
389 TestNetworkPlugins/group/bridge/KubeletFlags 0.28
390 TestNetworkPlugins/group/bridge/NetCatPod 9.16
391 TestNetworkPlugins/group/bridge/DNS 0.1
392 TestNetworkPlugins/group/bridge/Localhost 0.08
393 TestNetworkPlugins/group/bridge/HairPin 0.08
TestDownloadOnly/v1.28.0/json-events (4.69s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-053089 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-053089 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (4.694600037s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (4.69s)

                                                
                                    
TestDownloadOnly/v1.28.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1124 13:13:37.471190  351593 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1124 13:13:37.471289  351593 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21932-348000/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-053089
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-053089: exit status 85 (74.650053ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-053089 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-053089 │ jenkins │ v1.37.0 │ 24 Nov 25 13:13 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 13:13:32
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 13:13:32.828931  351605 out.go:360] Setting OutFile to fd 1 ...
	I1124 13:13:32.829149  351605 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:13:32.829157  351605 out.go:374] Setting ErrFile to fd 2...
	I1124 13:13:32.829162  351605 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:13:32.829352  351605 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-348000/.minikube/bin
	W1124 13:13:32.829462  351605 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21932-348000/.minikube/config/config.json: open /home/jenkins/minikube-integration/21932-348000/.minikube/config/config.json: no such file or directory
	I1124 13:13:32.829969  351605 out.go:368] Setting JSON to true
	I1124 13:13:32.830865  351605 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6960,"bootTime":1763983053,"procs":208,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 13:13:32.830928  351605 start.go:143] virtualization: kvm guest
	I1124 13:13:32.835170  351605 out.go:99] [download-only-053089] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	W1124 13:13:32.835335  351605 preload.go:354] Failed to list preload files: open /home/jenkins/minikube-integration/21932-348000/.minikube/cache/preloaded-tarball: no such file or directory
	I1124 13:13:32.835359  351605 notify.go:221] Checking for updates...
	I1124 13:13:32.836363  351605 out.go:171] MINIKUBE_LOCATION=21932
	I1124 13:13:32.837625  351605 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 13:13:32.838772  351605 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21932-348000/kubeconfig
	I1124 13:13:32.839881  351605 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21932-348000/.minikube
	I1124 13:13:32.841073  351605 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1124 13:13:32.843128  351605 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1124 13:13:32.843385  351605 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 13:13:32.865042  351605 docker.go:124] docker version: linux-29.0.3:Docker Engine - Community
	I1124 13:13:32.865161  351605 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 13:13:32.920620  351605 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:64 SystemTime:2025-11-24 13:13:32.910759915 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 13:13:32.920730  351605 docker.go:319] overlay module found
	I1124 13:13:32.922104  351605 out.go:99] Using the docker driver based on user configuration
	I1124 13:13:32.922140  351605 start.go:309] selected driver: docker
	I1124 13:13:32.922150  351605 start.go:927] validating driver "docker" against <nil>
	I1124 13:13:32.922273  351605 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 13:13:32.973943  351605 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:64 SystemTime:2025-11-24 13:13:32.965128401 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 13:13:32.974140  351605 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1124 13:13:32.974652  351605 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1124 13:13:32.974837  351605 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1124 13:13:32.976324  351605 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-053089 host does not exist
	  To start a cluster, run: "minikube start -p download-only-053089"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.08s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.21s)

                                                
                                    
TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-053089
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                    
TestDownloadOnly/v1.34.1/json-events (3.98s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-176855 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-176855 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (3.979227558s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (3.98s)

                                                
                                    
TestDownloadOnly/v1.34.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1124 13:13:41.876786  351593 preload.go:188] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1124 13:13:41.876830  351593 preload.go:203] Found local preload: /home/jenkins/minikube-integration/21932-348000/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.34.1/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-176855
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-176855: exit status 85 (69.774307ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-053089 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-053089 │ jenkins │ v1.37.0 │ 24 Nov 25 13:13 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 24 Nov 25 13:13 UTC │ 24 Nov 25 13:13 UTC │
	│ delete  │ -p download-only-053089                                                                                                                                                   │ download-only-053089 │ jenkins │ v1.37.0 │ 24 Nov 25 13:13 UTC │ 24 Nov 25 13:13 UTC │
	│ start   │ -o=json --download-only -p download-only-176855 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-176855 │ jenkins │ v1.37.0 │ 24 Nov 25 13:13 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/24 13:13:37
	Running on machine: ubuntu-20-agent-15
	Binary: Built with gc go1.25.3 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1124 13:13:37.948210  351960 out.go:360] Setting OutFile to fd 1 ...
	I1124 13:13:37.948312  351960 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:13:37.948320  351960 out.go:374] Setting ErrFile to fd 2...
	I1124 13:13:37.948324  351960 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:13:37.948489  351960 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-348000/.minikube/bin
	I1124 13:13:37.948910  351960 out.go:368] Setting JSON to true
	I1124 13:13:37.949761  351960 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":6965,"bootTime":1763983053,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 13:13:37.949811  351960 start.go:143] virtualization: kvm guest
	I1124 13:13:37.951501  351960 out.go:99] [download-only-176855] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1124 13:13:37.951643  351960 notify.go:221] Checking for updates...
	I1124 13:13:37.953095  351960 out.go:171] MINIKUBE_LOCATION=21932
	I1124 13:13:37.954527  351960 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 13:13:37.955737  351960 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21932-348000/kubeconfig
	I1124 13:13:37.956907  351960 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21932-348000/.minikube
	I1124 13:13:37.958057  351960 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1124 13:13:37.959995  351960 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1124 13:13:37.960236  351960 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 13:13:37.981609  351960 docker.go:124] docker version: linux-29.0.3:Docker Engine - Community
	I1124 13:13:37.981715  351960 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 13:13:38.033601  351960 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:52 SystemTime:2025-11-24 13:13:38.024307333 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 13:13:38.033705  351960 docker.go:319] overlay module found
	I1124 13:13:38.035158  351960 out.go:99] Using the docker driver based on user configuration
	I1124 13:13:38.035189  351960 start.go:309] selected driver: docker
	I1124 13:13:38.035197  351960 start.go:927] validating driver "docker" against <nil>
	I1124 13:13:38.035286  351960 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 13:13:38.090066  351960 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:false NGoroutines:52 SystemTime:2025-11-24 13:13:38.081258859 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 13:13:38.090238  351960 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1124 13:13:38.090763  351960 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1124 13:13:38.090945  351960 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1124 13:13:38.092498  351960 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-176855 host does not exist
	  To start a cluster, run: "minikube start -p download-only-176855"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.07s)

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAll (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.21s)

                                                
                                    
TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-176855
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
TestDownloadOnlyKic (0.39s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-849908 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "download-docker-849908" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-849908
--- PASS: TestDownloadOnlyKic (0.39s)

                                                
                                    
TestBinaryMirror (0.79s)

                                                
                                                
=== RUN   TestBinaryMirror
I1124 13:13:42.948074  351593 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-545470 --alsologtostderr --binary-mirror http://127.0.0.1:44271 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-545470" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-545470
--- PASS: TestBinaryMirror (0.79s)

                                                
                                    
TestOffline (91.47s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-669749 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-669749 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (1m25.291542953s)
helpers_test.go:175: Cleaning up "offline-crio-669749" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-669749
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-669749: (6.177582966s)
--- PASS: TestOffline (91.47s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-715644
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-715644: exit status 85 (62.349319ms)

                                                
                                                
-- stdout --
	* Profile "addons-715644" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-715644"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-715644
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-715644: exit status 85 (62.387599ms)

                                                
                                                
-- stdout --
	* Profile "addons-715644" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-715644"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

                                                
                                    
TestAddons/Setup (127.4s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-715644 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-715644 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m7.393573s)
--- PASS: TestAddons/Setup (127.40s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.11s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-715644 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-715644 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.11s)

                                                
                                    
TestAddons/serial/GCPAuth/FakeCredentials (8.39s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-715644 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-715644 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [9962db4c-07c4-44a4-9f16-ae616a655918] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [9962db4c-07c4-44a4-9f16-ae616a655918] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 8.002754724s
addons_test.go:694: (dbg) Run:  kubectl --context addons-715644 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-715644 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-715644 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (8.39s)

                                                
                                    
TestAddons/StoppedEnableDisable (18.52s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-715644
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-715644: (18.247632188s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-715644
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-715644
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-715644
--- PASS: TestAddons/StoppedEnableDisable (18.52s)

                                                
                                    
TestCertOptions (25.91s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-213186 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-213186 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (22.83875373s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-213186 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-213186 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-213186 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-213186" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-213186
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-213186: (2.39009798s)
--- PASS: TestCertOptions (25.91s)

                                                
                                    
TestCertExpiration (218.47s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-107341 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-107341 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (27.532173897s)
E1124 13:55:51.723102  351593 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/addons-715644/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-107341 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-107341 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (7.969976959s)
helpers_test.go:175: Cleaning up "cert-expiration-107341" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-107341
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-107341: (2.969484912s)
--- PASS: TestCertExpiration (218.47s)

                                                
                                    
TestForceSystemdFlag (30.72s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-045398 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-045398 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (28.071861482s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-045398 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-045398" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-045398
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-045398: (2.366959158s)
--- PASS: TestForceSystemdFlag (30.72s)

                                                
                                    
TestForceSystemdEnv (33.99s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-699216 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-699216 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (31.438505256s)
helpers_test.go:175: Cleaning up "force-systemd-env-699216" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-699216
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-699216: (2.547511548s)
--- PASS: TestForceSystemdEnv (33.99s)

                                                
                                    
TestErrorSpam/setup (23.07s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-678760 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-678760 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-678760 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-678760 --driver=docker  --container-runtime=crio: (23.069478982s)
--- PASS: TestErrorSpam/setup (23.07s)

                                                
                                    
TestErrorSpam/start (0.64s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-678760 --log_dir /tmp/nospam-678760 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-678760 --log_dir /tmp/nospam-678760 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-678760 --log_dir /tmp/nospam-678760 start --dry-run
--- PASS: TestErrorSpam/start (0.64s)

                                                
                                    
TestErrorSpam/status (0.94s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-678760 --log_dir /tmp/nospam-678760 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-678760 --log_dir /tmp/nospam-678760 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-678760 --log_dir /tmp/nospam-678760 status
--- PASS: TestErrorSpam/status (0.94s)

                                                
                                    
TestErrorSpam/pause (5.81s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-678760 --log_dir /tmp/nospam-678760 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-678760 --log_dir /tmp/nospam-678760 pause: exit status 80 (2.026769426s)

                                                
                                                
-- stdout --
	* Pausing node nospam-678760 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T13:19:30Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-678760 --log_dir /tmp/nospam-678760 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-678760 --log_dir /tmp/nospam-678760 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-678760 --log_dir /tmp/nospam-678760 pause: exit status 80 (2.34500047s)

                                                
                                                
-- stdout --
	* Pausing node nospam-678760 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T13:19:32Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-678760 --log_dir /tmp/nospam-678760 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-678760 --log_dir /tmp/nospam-678760 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-678760 --log_dir /tmp/nospam-678760 pause: exit status 80 (1.439453416s)

                                                
                                                
-- stdout --
	* Pausing node nospam-678760 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T13:19:34Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-678760 --log_dir /tmp/nospam-678760 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (5.81s)

                                                
                                    
TestErrorSpam/unpause (5.65s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-678760 --log_dir /tmp/nospam-678760 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-678760 --log_dir /tmp/nospam-678760 unpause: exit status 80 (2.204268174s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-678760 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T13:19:36Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-678760 --log_dir /tmp/nospam-678760 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-678760 --log_dir /tmp/nospam-678760 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-678760 --log_dir /tmp/nospam-678760 unpause: exit status 80 (1.632717468s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-678760 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T13:19:38Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-678760 --log_dir /tmp/nospam-678760 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-678760 --log_dir /tmp/nospam-678760 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-678760 --log_dir /tmp/nospam-678760 unpause: exit status 80 (1.81089813s)

                                                
                                                
-- stdout --
	* Unpausing node nospam-678760 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-24T13:19:39Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-678760 --log_dir /tmp/nospam-678760 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (5.65s)

                                                
                                    
TestErrorSpam/stop (12.55s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-678760 --log_dir /tmp/nospam-678760 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-678760 --log_dir /tmp/nospam-678760 stop: (12.351954528s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-678760 --log_dir /tmp/nospam-678760 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-678760 --log_dir /tmp/nospam-678760 stop
--- PASS: TestErrorSpam/stop (12.55s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21932-348000/.minikube/files/etc/test/nested/copy/351593/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (36.13s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-334592 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-334592 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (36.125182331s)
--- PASS: TestFunctional/serial/StartWithProxy (36.13s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (6s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1124 13:20:33.483681  351593 config.go:182] Loaded profile config "functional-334592": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-334592 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-334592 --alsologtostderr -v=8: (6.003256456s)
functional_test.go:678: soft start took 6.004387176s for "functional-334592" cluster.
I1124 13:20:39.487709  351593 config.go:182] Loaded profile config "functional-334592": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (6.00s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-334592 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (2.74s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-334592 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-334592 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-334592 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.74s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.13s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-334592 /tmp/TestFunctionalserialCacheCmdcacheadd_local2725129541/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-334592 cache add minikube-local-cache-test:functional-334592
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-334592 cache delete minikube-local-cache-test:functional-334592
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-334592
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.13s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-334592 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.52s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-334592 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-334592 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-334592 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (282.141435ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-334592 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-334592 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.52s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-334592 kubectl -- --context functional-334592 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-334592 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                    
TestFunctional/serial/ExtraConfig (46.46s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-334592 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1124 13:20:51.724781  351593 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/addons-715644/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:20:51.731145  351593 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/addons-715644/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:20:51.742490  351593 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/addons-715644/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:20:51.763818  351593 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/addons-715644/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:20:51.805135  351593 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/addons-715644/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:20:51.886489  351593 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/addons-715644/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:20:52.047949  351593 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/addons-715644/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:20:52.369561  351593 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/addons-715644/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:20:53.011109  351593 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/addons-715644/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:20:54.292719  351593 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/addons-715644/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:20:56.855630  351593 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/addons-715644/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:21:01.977431  351593 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/addons-715644/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:21:12.219048  351593 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/addons-715644/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-334592 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (46.460535537s)
functional_test.go:776: restart took 46.460678319s for "functional-334592" cluster.
I1124 13:21:32.192830  351593 config.go:182] Loaded profile config "functional-334592": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (46.46s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-334592 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.06s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.13s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-334592 logs
E1124 13:21:32.700984  351593 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/addons-715644/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-334592 logs: (1.131845674s)
--- PASS: TestFunctional/serial/LogsCmd (1.13s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.14s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-334592 logs --file /tmp/TestFunctionalserialLogsFileCmd2422585898/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-334592 logs --file /tmp/TestFunctionalserialLogsFileCmd2422585898/001/logs.txt: (1.14027934s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.14s)

                                                
                                    
TestFunctional/serial/InvalidService (4.6s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-334592 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-334592
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-334592: exit status 115 (334.496399ms)

                                                
                                                
-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:30327 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-334592 delete -f testdata/invalidsvc.yaml
functional_test.go:2332: (dbg) Done: kubectl --context functional-334592 delete -f testdata/invalidsvc.yaml: (1.105286034s)
--- PASS: TestFunctional/serial/InvalidService (4.60s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.45s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-334592 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-334592 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-334592 config get cpus: exit status 14 (85.957986ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-334592 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-334592 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-334592 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-334592 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-334592 config get cpus: exit status 14 (81.983088ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.45s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (8.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-334592 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-334592 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 389920: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (8.14s)

                                                
                                    
TestFunctional/parallel/DryRun (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-334592 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-334592 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (157.054058ms)

                                                
                                                
-- stdout --
	* [functional-334592] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21932
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21932-348000/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21932-348000/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1124 13:22:08.452169  388926 out.go:360] Setting OutFile to fd 1 ...
	I1124 13:22:08.452261  388926 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:22:08.452265  388926 out.go:374] Setting ErrFile to fd 2...
	I1124 13:22:08.452269  388926 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:22:08.452450  388926 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-348000/.minikube/bin
	I1124 13:22:08.452846  388926 out.go:368] Setting JSON to false
	I1124 13:22:08.453836  388926 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":7475,"bootTime":1763983053,"procs":241,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 13:22:08.453907  388926 start.go:143] virtualization: kvm guest
	I1124 13:22:08.455955  388926 out.go:179] * [functional-334592] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1124 13:22:08.457068  388926 out.go:179]   - MINIKUBE_LOCATION=21932
	I1124 13:22:08.457053  388926 notify.go:221] Checking for updates...
	I1124 13:22:08.459290  388926 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 13:22:08.460800  388926 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21932-348000/kubeconfig
	I1124 13:22:08.461902  388926 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21932-348000/.minikube
	I1124 13:22:08.463076  388926 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1124 13:22:08.464233  388926 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 13:22:08.465785  388926 config.go:182] Loaded profile config "functional-334592": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 13:22:08.466332  388926 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 13:22:08.489090  388926 docker.go:124] docker version: linux-29.0.3:Docker Engine - Community
	I1124 13:22:08.489228  388926 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 13:22:08.541007  388926 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-11-24 13:22:08.532446611 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 13:22:08.541114  388926 docker.go:319] overlay module found
	I1124 13:22:08.543186  388926 out.go:179] * Using the docker driver based on existing profile
	I1124 13:22:08.544161  388926 start.go:309] selected driver: docker
	I1124 13:22:08.544176  388926 start.go:927] validating driver "docker" against &{Name:functional-334592 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-334592 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 13:22:08.544251  388926 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 13:22:08.545699  388926 out.go:203] 
	W1124 13:22:08.546733  388926 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1124 13:22:08.549422  388926 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-334592 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.38s)

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-334592 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-334592 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (175.205934ms)

                                                
                                                
-- stdout --
	* [functional-334592] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21932
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21932-348000/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21932-348000/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1124 13:22:08.839376  389165 out.go:360] Setting OutFile to fd 1 ...
	I1124 13:22:08.839652  389165 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:22:08.839663  389165 out.go:374] Setting ErrFile to fd 2...
	I1124 13:22:08.839669  389165 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:22:08.840012  389165 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-348000/.minikube/bin
	I1124 13:22:08.840458  389165 out.go:368] Setting JSON to false
	I1124 13:22:08.841503  389165 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":7476,"bootTime":1763983053,"procs":244,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 13:22:08.841566  389165 start.go:143] virtualization: kvm guest
	I1124 13:22:08.843884  389165 out.go:179] * [functional-334592] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1124 13:22:08.845228  389165 out.go:179]   - MINIKUBE_LOCATION=21932
	I1124 13:22:08.845243  389165 notify.go:221] Checking for updates...
	I1124 13:22:08.847338  389165 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 13:22:08.848441  389165 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21932-348000/kubeconfig
	I1124 13:22:08.849485  389165 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21932-348000/.minikube
	I1124 13:22:08.854092  389165 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1124 13:22:08.855386  389165 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 13:22:08.856730  389165 config.go:182] Loaded profile config "functional-334592": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 13:22:08.857310  389165 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 13:22:08.880640  389165 docker.go:124] docker version: linux-29.0.3:Docker Engine - Community
	I1124 13:22:08.880782  389165 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 13:22:08.936388  389165 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-11-24 13:22:08.927594092 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 13:22:08.936504  389165 docker.go:319] overlay module found
	I1124 13:22:08.938143  389165 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1124 13:22:08.939144  389165 start.go:309] selected driver: docker
	I1124 13:22:08.939162  389165 start.go:927] validating driver "docker" against &{Name:functional-334592 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1763789673-21948@sha256:bb10ebd3ca086eea12c038085866fb2f6cfa67385dcb830c4deb5e36ced6b53f Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-334592 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1124 13:22:08.939271  389165 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 13:22:08.940961  389165 out.go:203] 
	W1124 13:22:08.942055  389165 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1124 13:22:08.943189  389165 out.go:203] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.18s)
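The French stderr above is the expected localized output: "Utilisation du pilote docker basé sur le profil existant" is "Using the docker driver based on the existing profile", and the RSRC_INSUFFICIENT_REQ_MEMORY line reads "Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: the requested memory allocation of 250 MiB is below the usable minimum of 1800 MB". A minimal sketch of triggering the same localized failure by hand (hypothetical invocation; the harness drives this internally and the exact flags are an assumption of this sketch):
	# Force a French locale and request too little memory so start aborts before mutating state
	LC_ALL=fr out/minikube-linux-amd64 start -p functional-334592 --dry-run --memory 250MB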

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (0.93s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-334592 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-334592 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-334592 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.93s)
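The second invocation above shows that status output is templatable. A minimal sketch of querying individual fields (assuming the functional-334592 profile from this run is still up; the "kublet" spelling in the test's template is just its output label, the struct field is .Kubelet):
	# Print selected status fields using a Go template
	out/minikube-linux-amd64 -p functional-334592 status -f 'host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}}'
	# Machine-readable form for scripting
	out/minikube-linux-amd64 -p functional-334592 status -o json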

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-334592 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-334592 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (26.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [4fa10669-d9fd-4899-b4e5-a104722fc39e] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.002011928s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-334592 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-334592 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-334592 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-334592 apply -f testdata/storage-provisioner/pod.yaml
I1124 13:21:48.130157  351593 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [61925cb9-d8f8-4920-8585-ceeeaf49891c] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [61925cb9-d8f8-4920-8585-ceeeaf49891c] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 11.003030412s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-334592 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-334592 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-334592 apply -f testdata/storage-provisioner/pod.yaml
I1124 13:22:00.314929  351593 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [b023c8d5-0180-4c83-b801-a144d43fd1e2] Pending
helpers_test.go:352: "sp-pod" [b023c8d5-0180-4c83-b801-a144d43fd1e2] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [b023c8d5-0180-4c83-b801-a144d43fd1e2] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.002511352s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-334592 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (26.61s)
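The sequence above verifies that data written through the PVC survives pod deletion. A condensed sketch of the same check, using the manifests and paths from this run (assumes the functional-334592 profile is running and a default StorageClass exists):
	# Create the claim and a pod that mounts it at /tmp/mount, then write a marker file
	kubectl --context functional-334592 apply -f testdata/storage-provisioner/pvc.yaml
	kubectl --context functional-334592 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-334592 wait --for=condition=Ready pod/sp-pod --timeout=6m
	kubectl --context functional-334592 exec sp-pod -- touch /tmp/mount/foo
	# Recreate the pod and confirm the file persisted on the volume
	kubectl --context functional-334592 delete -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-334592 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-334592 wait --for=condition=Ready pod/sp-pod --timeout=6m
	kubectl --context functional-334592 exec sp-pod -- ls /tmp/mount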

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.82s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-334592 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-334592 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.82s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.84s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-334592 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-334592 ssh -n functional-334592 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-334592 cp functional-334592:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd2291886447/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-334592 ssh -n functional-334592 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-334592 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-334592 ssh -n functional-334592 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.84s)
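A minimal sketch of the copy round-trip exercised above (paths taken from this run; the host-side destination path is illustrative):
	# Host -> node, then read it back over SSH
	out/minikube-linux-amd64 -p functional-334592 cp testdata/cp-test.txt /home/docker/cp-test.txt
	out/minikube-linux-amd64 -p functional-334592 ssh -n functional-334592 "sudo cat /home/docker/cp-test.txt"
	# Node -> host
	out/minikube-linux-amd64 -p functional-334592 cp functional-334592:/home/docker/cp-test.txt /tmp/cp-test.txt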

                                                
                                    
x
+
TestFunctional/parallel/MySQL (16.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-334592 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-c2kmw" [7ef7a645-f588-42bc-bb15-c1d05d6940ea] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:352: "mysql-5bb876957f-c2kmw" [7ef7a645-f588-42bc-bb15-c1d05d6940ea] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 13.003274712s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-334592 exec mysql-5bb876957f-c2kmw -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-334592 exec mysql-5bb876957f-c2kmw -- mysql -ppassword -e "show databases;": exit status 1 (87.216664ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1124 13:21:52.879434  351593 retry.go:31] will retry after 862.868995ms: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-334592 exec mysql-5bb876957f-c2kmw -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-334592 exec mysql-5bb876957f-c2kmw -- mysql -ppassword -e "show databases;": exit status 1 (85.19802ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I1124 13:21:53.828203  351593 retry.go:31] will retry after 1.747813928s: exit status 1
I1124 13:21:53.828397  351593 kapi.go:150] Service nginx-svc in namespace default found.
functional_test.go:1812: (dbg) Run:  kubectl --context functional-334592 exec mysql-5bb876957f-c2kmw -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (16.06s)
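The two non-zero exits above are expected: the pod reports Running before mysqld is listening on its socket, so the test retries until the query succeeds. A small hand-rolled retry along the same lines (pod name taken from this run; in practice it would be looked up via the app=mysql label):
	# Poll until mysqld inside the pod accepts connections
	until kubectl --context functional-334592 exec mysql-5bb876957f-c2kmw -- \
	  mysql -ppassword -e "show databases;"; do
	  sleep 1
	done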

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/351593/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-334592 ssh "sudo cat /etc/test/nested/copy/351593/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.34s)
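FileSync checks minikube's file-sync feature: files placed under $MINIKUBE_HOME/files/ are copied into the node at the corresponding absolute path. A rough sketch of verifying it by hand (the nested path mirrors this run's /etc/test/nested/copy/351593/hosts; the sync-on-start behaviour is an assumption of this sketch):
	# Stage a file on the host side, (re)start the profile, then confirm it shows up inside the node
	mkdir -p "$HOME/.minikube/files/etc/test/nested/copy/351593"
	echo "Test file for checking file sync process" > "$HOME/.minikube/files/etc/test/nested/copy/351593/hosts"
	out/minikube-linux-amd64 -p functional-334592 ssh "sudo cat /etc/test/nested/copy/351593/hosts"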

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.84s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/351593.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-334592 ssh "sudo cat /etc/ssl/certs/351593.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/351593.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-334592 ssh "sudo cat /usr/share/ca-certificates/351593.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-334592 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3515932.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-334592 ssh "sudo cat /etc/ssl/certs/3515932.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/3515932.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-334592 ssh "sudo cat /usr/share/ca-certificates/3515932.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-334592 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.84s)
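CertSync exercises the companion mechanism for CA certificates: PEM files staged under $MINIKUBE_HOME/certs/ are installed into the node's trust store, which is why the same certificate is readable both by name and under its OpenSSL hash name (e.g. 51391683.0). A hand-run check along the same lines (paths from this run; the sync-on-start behaviour is an assumption of this sketch):
	# After placing 351593.pem under $MINIKUBE_HOME/certs/ and (re)starting the profile:
	out/minikube-linux-amd64 -p functional-334592 ssh "sudo cat /etc/ssl/certs/351593.pem"
	out/minikube-linux-amd64 -p functional-334592 ssh "sudo cat /usr/share/ca-certificates/351593.pem"
	out/minikube-linux-amd64 -p functional-334592 ssh "sudo cat /etc/ssl/certs/51391683.0"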

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-334592 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-334592 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-334592 ssh "sudo systemctl is-active docker": exit status 1 (272.366477ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-334592 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-334592 ssh "sudo systemctl is-active containerd": exit status 1 (273.938227ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.55s)
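The non-zero exits here are the desired outcome: with crio as the active runtime, `systemctl is-active` prints "inactive" for docker and containerd and returns a non-zero status. Checking the same thing directly (profile name from this run):
	# "inactive" plus a non-zero exit is the expected result for the disabled runtimes
	out/minikube-linux-amd64 -p functional-334592 ssh "sudo systemctl is-active docker"
	out/minikube-linux-amd64 -p functional-334592 ssh "sudo systemctl is-active containerd"
	out/minikube-linux-amd64 -p functional-334592 ssh "sudo systemctl is-active crio"   # should print "active"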

                                                
                                    
x
+
TestFunctional/parallel/License (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.47s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-334592 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-334592 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.14s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-334592 update-context --alsologtostderr -v=2
E1124 13:22:13.663288  351593 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/addons-715644/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.15s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-334592 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-334592 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-334592 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-334592 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 384671: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.55s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-334592 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (12.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-334592 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [9f8e960a-24ee-4cac-b564-ac8a653b12af] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [9f8e960a-24ee-4cac-b564-ac8a653b12af] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 12.002966693s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (12.30s)
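The tunnel subtests that follow rely on `minikube tunnel` assigning an external IP to LoadBalancer services. A condensed sketch of the flow (manifest and service name from this run; the curl probe is an illustrative stand-in for the test's HTTP check):
	# Terminal 1: keep the tunnel running (it may prompt for sudo to create routes)
	out/minikube-linux-amd64 -p functional-334592 tunnel --alsologtostderr
	# Terminal 2: deploy the service, read the assigned ingress IP, and hit it
	kubectl --context functional-334592 apply -f testdata/testsvc.yaml
	kubectl --context functional-334592 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
	curl -sSf "http://$(kubectl --context functional-334592 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}')/"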

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-334592 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.99.62.181 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-334592 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-334592 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-334592 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.52s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListShort (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-334592 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-334592 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.1
registry.k8s.io/kube-proxy:v1.34.1
registry.k8s.io/kube-controller-manager:v1.34.1
registry.k8s.io/kube-apiserver:v1.34.1
registry.k8s.io/etcd:3.6.4-0
registry.k8s.io/coredns/coredns:v1.12.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20250512-df8de77b
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-334592 image ls --format short --alsologtostderr:
I1124 13:22:14.545789  391539 out.go:360] Setting OutFile to fd 1 ...
I1124 13:22:14.546062  391539 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 13:22:14.546072  391539 out.go:374] Setting ErrFile to fd 2...
I1124 13:22:14.546075  391539 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 13:22:14.546334  391539 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-348000/.minikube/bin
I1124 13:22:14.546877  391539 config.go:182] Loaded profile config "functional-334592": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1124 13:22:14.547003  391539 config.go:182] Loaded profile config "functional-334592": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1124 13:22:14.547483  391539 cli_runner.go:164] Run: docker container inspect functional-334592 --format={{.State.Status}}
I1124 13:22:14.563977  391539 ssh_runner.go:195] Run: systemctl --version
I1124 13:22:14.564018  391539 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-334592
I1124 13:22:14.580762  391539 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/functional-334592/id_rsa Username:docker}
I1124 13:22:14.685070  391539 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.39s)
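The next three subtests run the same listing with different output formats; on a crio cluster the data ultimately comes from `sudo crictl images`, as the Stderr above shows. The four formats exercised in this run, side by side:
	out/minikube-linux-amd64 -p functional-334592 image ls --format short   # repo:tag only
	out/minikube-linux-amd64 -p functional-334592 image ls --format table   # human-readable table
	out/minikube-linux-amd64 -p functional-334592 image ls --format json    # machine-readable
	out/minikube-linux-amd64 -p functional-334592 image ls --format yaml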

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-334592 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-334592 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ 5f1f5298c888d │ 196MB  │
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ c3994bc696102 │ 89MB   │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ c80c8dbafe7dd │ 76MB   │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ docker.io/library/mysql                 │ 5.7                │ 5107333e08a87 │ 520MB  │
│ docker.io/library/nginx                 │ alpine             │ d4918ca78576a │ 54.3MB │
│ docker.io/library/nginx                 │ latest             │ 60adc2e137e75 │ 155MB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ 7dd6aaa1717ab │ 53.8MB │
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ fc25172553d79 │ 73.1MB │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-334592 image ls --format table --alsologtostderr:
I1124 13:22:15.389822  391748 out.go:360] Setting OutFile to fd 1 ...
I1124 13:22:15.389978  391748 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 13:22:15.389988  391748 out.go:374] Setting ErrFile to fd 2...
I1124 13:22:15.389993  391748 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 13:22:15.390195  391748 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-348000/.minikube/bin
I1124 13:22:15.390720  391748 config.go:182] Loaded profile config "functional-334592": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1124 13:22:15.390847  391748 config.go:182] Loaded profile config "functional-334592": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1124 13:22:15.391281  391748 cli_runner.go:164] Run: docker container inspect functional-334592 --format={{.State.Status}}
I1124 13:22:15.408433  391748 ssh_runner.go:195] Run: systemctl --version
I1124 13:22:15.408472  391748 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-334592
I1124 13:22:15.424786  391748 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/functional-334592/id_rsa Username:docker}
I1124 13:22:15.523247  391748 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-334592 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-334592 image ls --format json --alsologtostderr:
[{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813","repoDigests":["registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31","registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"53844823"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fb
e50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDiges
ts":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},{"id":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"195976448"},{"id":"c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97","repoDigests":["registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964","registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"89046001"},{"id":"c80c8dbafe
7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89","registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"76004181"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"115053965e86b2df4
d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9","repoDigests":["docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7","docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14"],"repoTags":["docker.io/library/nginx:alpine"],"size":"54252718"},{"
id":"60adc2e137e757418d4d771822fa3b3f5d3b4ad58ef2385d200c9ee78375b6d5","repoDigests":["docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42","docker.io/library/nginx@sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541"],"repoTags":["docker.io/library/nginx:latest"],"size":"155491845"},{"id":"fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7","repoDigests":["registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a","registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"73138073"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause
:3.10.1"],"size":"742092"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-334592 image ls --format json --alsologtostderr:
I1124 13:22:15.169314  391693 out.go:360] Setting OutFile to fd 1 ...
I1124 13:22:15.169423  391693 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 13:22:15.169433  391693 out.go:374] Setting ErrFile to fd 2...
I1124 13:22:15.169439  391693 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 13:22:15.169604  391693 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-348000/.minikube/bin
I1124 13:22:15.170169  391693 config.go:182] Loaded profile config "functional-334592": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1124 13:22:15.170290  391693 config.go:182] Loaded profile config "functional-334592": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1124 13:22:15.170725  391693 cli_runner.go:164] Run: docker container inspect functional-334592 --format={{.State.Status}}
I1124 13:22:15.187960  391693 ssh_runner.go:195] Run: systemctl --version
I1124 13:22:15.188004  391693 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-334592
I1124 13:22:15.203785  391693 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/functional-334592/id_rsa Username:docker}
I1124 13:22:15.303313  391693 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.22s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-334592 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-334592 image ls --format yaml --alsologtostderr:
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "195976448"
- id: c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "89046001"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: 7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "53844823"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9
repoDigests:
- docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7
- docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14
repoTags:
- docker.io/library/nginx:alpine
size: "54252718"
- id: 60adc2e137e757418d4d771822fa3b3f5d3b4ad58ef2385d200c9ee78375b6d5
repoDigests:
- docker.io/library/nginx@sha256:553f64aecdc31b5bf944521731cd70e35da4faed96b2b7548a3d8e2598c52a42
- docker.io/library/nginx@sha256:5c733364e9a8f7e6d7289ceaad623c6600479fe95c3ab5534f07bfd7416d9541
repoTags:
- docker.io/library/nginx:latest
size: "155491845"
- id: c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
- registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "76004181"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7
repoDigests:
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
- registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "73138073"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-334592 image ls --format yaml --alsologtostderr:
I1124 13:22:14.943776  391628 out.go:360] Setting OutFile to fd 1 ...
I1124 13:22:14.943879  391628 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 13:22:14.943886  391628 out.go:374] Setting ErrFile to fd 2...
I1124 13:22:14.943903  391628 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 13:22:14.944108  391628 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-348000/.minikube/bin
I1124 13:22:14.944873  391628 config.go:182] Loaded profile config "functional-334592": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1124 13:22:14.945047  391628 config.go:182] Loaded profile config "functional-334592": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1124 13:22:14.945848  391628 cli_runner.go:164] Run: docker container inspect functional-334592 --format={{.State.Status}}
I1124 13:22:14.965697  391628 ssh_runner.go:195] Run: systemctl --version
I1124 13:22:14.965763  391628 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-334592
I1124 13:22:14.982522  391628 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/functional-334592/id_rsa Username:docker}
I1124 13:22:15.081193  391628 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageBuild (2.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-334592 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-334592 ssh pgrep buildkitd: exit status 1 (260.94168ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-334592 image build -t localhost/my-image:functional-334592 testdata/build --alsologtostderr
2025/11/24 13:22:16 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-334592 image build -t localhost/my-image:functional-334592 testdata/build --alsologtostderr: (1.609648876s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-334592 image build -t localhost/my-image:functional-334592 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> d1f00fc3b7e
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-334592
--> 1fa308f0140
Successfully tagged localhost/my-image:functional-334592
1fa308f014045b985597c17192964fa8df59952c68c16598c6b1e216372bd972
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-334592 image build -t localhost/my-image:functional-334592 testdata/build --alsologtostderr:
I1124 13:22:15.871861  391908 out.go:360] Setting OutFile to fd 1 ...
I1124 13:22:15.872001  391908 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 13:22:15.872012  391908 out.go:374] Setting ErrFile to fd 2...
I1124 13:22:15.872024  391908 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1124 13:22:15.872235  391908 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-348000/.minikube/bin
I1124 13:22:15.872771  391908 config.go:182] Loaded profile config "functional-334592": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1124 13:22:15.873379  391908 config.go:182] Loaded profile config "functional-334592": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1124 13:22:15.873834  391908 cli_runner.go:164] Run: docker container inspect functional-334592 --format={{.State.Status}}
I1124 13:22:15.891794  391908 ssh_runner.go:195] Run: systemctl --version
I1124 13:22:15.891837  391908 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-334592
I1124 13:22:15.908745  391908 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33153 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/functional-334592/id_rsa Username:docker}
I1124 13:22:16.008137  391908 build_images.go:162] Building image from path: /tmp/build.623825006.tar
I1124 13:22:16.008220  391908 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1124 13:22:16.016122  391908 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.623825006.tar
I1124 13:22:16.019659  391908 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.623825006.tar: stat -c "%s %y" /var/lib/minikube/build/build.623825006.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.623825006.tar': No such file or directory
I1124 13:22:16.019679  391908 ssh_runner.go:362] scp /tmp/build.623825006.tar --> /var/lib/minikube/build/build.623825006.tar (3072 bytes)
I1124 13:22:16.036755  391908 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.623825006
I1124 13:22:16.043822  391908 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.623825006 -xf /var/lib/minikube/build/build.623825006.tar
I1124 13:22:16.051333  391908 crio.go:315] Building image: /var/lib/minikube/build/build.623825006
I1124 13:22:16.051406  391908 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-334592 /var/lib/minikube/build/build.623825006 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1124 13:22:17.404917  391908 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-334592 /var/lib/minikube/build/build.623825006 --cgroup-manager=cgroupfs: (1.353461488s)
I1124 13:22:17.404989  391908 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.623825006
I1124 13:22:17.412989  391908 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.623825006.tar
I1124 13:22:17.420205  391908 build_images.go:218] Built localhost/my-image:functional-334592 from /tmp/build.623825006.tar
I1124 13:22:17.420230  391908 build_images.go:134] succeeded building to: functional-334592
I1124 13:22:17.420234  391908 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-334592 image ls
E1124 13:23:35.585592  351593 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/addons-715644/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:25:51.723517  351593 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/addons-715644/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:26:19.427576  351593 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/addons-715644/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:30:51.723130  351593 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/addons-715644/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.09s)

TestFunctional/parallel/ImageCommands/Setup (1.16s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.144864179s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-334592
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.16s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.49s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-334592 image rm kicbase/echo-server:functional-334592 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-334592 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.49s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.4s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.40s)

TestFunctional/parallel/ProfileCmd/profile_list (0.38s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "323.220922ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "60.087406ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.38s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.39s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "326.184031ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "59.210504ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.39s)

TestFunctional/parallel/MountCmd/any-port (5.85s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-334592 /tmp/TestFunctionalparallelMountCmdany-port802503855/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1763990523820768742" to /tmp/TestFunctionalparallelMountCmdany-port802503855/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1763990523820768742" to /tmp/TestFunctionalparallelMountCmdany-port802503855/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1763990523820768742" to /tmp/TestFunctionalparallelMountCmdany-port802503855/001/test-1763990523820768742
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-334592 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-334592 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (281.738554ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1124 13:22:04.102811  351593 retry.go:31] will retry after 615.437086ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-334592 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-334592 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Nov 24 13:22 created-by-test
-rw-r--r-- 1 docker docker 24 Nov 24 13:22 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Nov 24 13:22 test-1763990523820768742
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-334592 ssh cat /mount-9p/test-1763990523820768742
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-334592 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [0d628332-f66f-4234-a791-bf68045bc15e] Pending
helpers_test.go:352: "busybox-mount" [0d628332-f66f-4234-a791-bf68045bc15e] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [0d628332-f66f-4234-a791-bf68045bc15e] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [0d628332-f66f-4234-a791-bf68045bc15e] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 3.00399868s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-334592 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-334592 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-334592 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-334592 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-334592 /tmp/TestFunctionalparallelMountCmdany-port802503855/001:/mount-9p --alsologtostderr -v=1] ...
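The steps above are the usual 9p mount round-trip: the host test directory is exposed to the guest at /mount-9p, the first findmnt probe races the mount daemon and is retried after roughly 615ms, and the busybox-mount pod then creates /mount-9p/created-by-pod, which the test verifies with "ssh stat". Re-running the same checks by hand, using only commands that already appear in this log, would look roughly like the sketch below (the host directory is a hypothetical placeholder).

	# illustrative only; /tmp/somedir is a hypothetical host path
	out/minikube-linux-amd64 mount -p functional-334592 /tmp/somedir:/mount-9p --alsologtostderr -v=1 &
	out/minikube-linux-amd64 -p functional-334592 ssh "findmnt -T /mount-9p | grep 9p"
	out/minikube-linux-amd64 -p functional-334592 ssh -- ls -la /mount-9p
	out/minikube-linux-amd64 -p functional-334592 ssh "sudo umount -f /mount-9p"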
--- PASS: TestFunctional/parallel/MountCmd/any-port (5.85s)

TestFunctional/parallel/MountCmd/specific-port (1.73s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-334592 /tmp/TestFunctionalparallelMountCmdspecific-port3035141573/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-334592 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-334592 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (328.305681ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1124 13:22:09.999039  351593 retry.go:31] will retry after 339.56336ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-334592 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-334592 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-334592 /tmp/TestFunctionalparallelMountCmdspecific-port3035141573/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-334592 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-334592 ssh "sudo umount -f /mount-9p": exit status 1 (308.488888ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-334592 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-334592 /tmp/TestFunctionalparallelMountCmdspecific-port3035141573/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.73s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.07s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-334592 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3656109573/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-334592 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3656109573/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-334592 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3656109573/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-334592 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-334592 ssh "findmnt -T" /mount1: exit status 1 (392.210258ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1124 13:22:11.796432  351593 retry.go:31] will retry after 623.768756ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-334592 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-334592 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-334592 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-334592 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-334592 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3656109573/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-334592 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3656109573/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-334592 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3656109573/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.07s)

TestFunctional/parallel/ServiceCmd/List (1.69s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-334592 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-amd64 -p functional-334592 service list: (1.693518517s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.69s)

TestFunctional/parallel/ServiceCmd/JSONOutput (1.7s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-334592 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-334592 service list -o json: (1.695281925s)
functional_test.go:1504: Took "1.695376676s" to run "out/minikube-linux-amd64 -p functional-334592 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.70s)

TestFunctional/delete_echo-server_images (0.04s)
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-334592
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-334592
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-334592
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (158.76s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-958431 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-958431 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (2m38.043928239s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-958431 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (158.76s)

TestMultiControlPlane/serial/DeployApp (4.15s)
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-958431 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-958431 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-958431 kubectl -- rollout status deployment/busybox: (2.250873722s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-958431 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-958431 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-958431 kubectl -- exec busybox-7b57f96db7-hq9zj -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-958431 kubectl -- exec busybox-7b57f96db7-rt5zq -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-958431 kubectl -- exec busybox-7b57f96db7-t2tpt -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-958431 kubectl -- exec busybox-7b57f96db7-hq9zj -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-958431 kubectl -- exec busybox-7b57f96db7-rt5zq -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-958431 kubectl -- exec busybox-7b57f96db7-t2tpt -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-958431 kubectl -- exec busybox-7b57f96db7-hq9zj -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-958431 kubectl -- exec busybox-7b57f96db7-rt5zq -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-958431 kubectl -- exec busybox-7b57f96db7-t2tpt -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (4.15s)

TestMultiControlPlane/serial/PingHostFromPods (1.02s)
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-958431 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-958431 kubectl -- exec busybox-7b57f96db7-hq9zj -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-958431 kubectl -- exec busybox-7b57f96db7-hq9zj -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-958431 kubectl -- exec busybox-7b57f96db7-rt5zq -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-958431 kubectl -- exec busybox-7b57f96db7-rt5zq -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-958431 kubectl -- exec busybox-7b57f96db7-t2tpt -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-958431 kubectl -- exec busybox-7b57f96db7-t2tpt -- sh -c "ping -c 1 192.168.49.1"
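Each exec above uses the same pipeline to pull the resolved address of host.minikube.internal out of nslookup's output (fifth line, third field) and then pings 192.168.49.1, the host-side gateway address of the cluster's Docker network. A single-pod version of the same check, with the pod name as a placeholder, looks roughly like:

	# illustrative; <busybox-pod> is a placeholder pod name
	out/minikube-linux-amd64 -p ha-958431 kubectl -- exec <busybox-pod> -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
	out/minikube-linux-amd64 -p ha-958431 kubectl -- exec <busybox-pod> -- sh -c "ping -c 1 192.168.49.1"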
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.02s)

TestMultiControlPlane/serial/AddWorkerNode (53.96s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-958431 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-958431 node add --alsologtostderr -v 5: (53.106761563s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-958431 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (53.96s)

TestMultiControlPlane/serial/NodeLabels (0.06s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-958431 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.86s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.86s)

TestMultiControlPlane/serial/CopyFile (16.94s)
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-958431 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-958431 cp testdata/cp-test.txt ha-958431:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-958431 ssh -n ha-958431 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-958431 cp ha-958431:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1816707863/001/cp-test_ha-958431.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-958431 ssh -n ha-958431 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-958431 cp ha-958431:/home/docker/cp-test.txt ha-958431-m02:/home/docker/cp-test_ha-958431_ha-958431-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-958431 ssh -n ha-958431 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-958431 ssh -n ha-958431-m02 "sudo cat /home/docker/cp-test_ha-958431_ha-958431-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-958431 cp ha-958431:/home/docker/cp-test.txt ha-958431-m03:/home/docker/cp-test_ha-958431_ha-958431-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-958431 ssh -n ha-958431 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-958431 ssh -n ha-958431-m03 "sudo cat /home/docker/cp-test_ha-958431_ha-958431-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-958431 cp ha-958431:/home/docker/cp-test.txt ha-958431-m04:/home/docker/cp-test_ha-958431_ha-958431-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-958431 ssh -n ha-958431 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-958431 ssh -n ha-958431-m04 "sudo cat /home/docker/cp-test_ha-958431_ha-958431-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-958431 cp testdata/cp-test.txt ha-958431-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-958431 ssh -n ha-958431-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-958431 cp ha-958431-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1816707863/001/cp-test_ha-958431-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-958431 ssh -n ha-958431-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-958431 cp ha-958431-m02:/home/docker/cp-test.txt ha-958431:/home/docker/cp-test_ha-958431-m02_ha-958431.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-958431 ssh -n ha-958431-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-958431 ssh -n ha-958431 "sudo cat /home/docker/cp-test_ha-958431-m02_ha-958431.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-958431 cp ha-958431-m02:/home/docker/cp-test.txt ha-958431-m03:/home/docker/cp-test_ha-958431-m02_ha-958431-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-958431 ssh -n ha-958431-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-958431 ssh -n ha-958431-m03 "sudo cat /home/docker/cp-test_ha-958431-m02_ha-958431-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-958431 cp ha-958431-m02:/home/docker/cp-test.txt ha-958431-m04:/home/docker/cp-test_ha-958431-m02_ha-958431-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-958431 ssh -n ha-958431-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-958431 ssh -n ha-958431-m04 "sudo cat /home/docker/cp-test_ha-958431-m02_ha-958431-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-958431 cp testdata/cp-test.txt ha-958431-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-958431 ssh -n ha-958431-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-958431 cp ha-958431-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1816707863/001/cp-test_ha-958431-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-958431 ssh -n ha-958431-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-958431 cp ha-958431-m03:/home/docker/cp-test.txt ha-958431:/home/docker/cp-test_ha-958431-m03_ha-958431.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-958431 ssh -n ha-958431-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-958431 ssh -n ha-958431 "sudo cat /home/docker/cp-test_ha-958431-m03_ha-958431.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-958431 cp ha-958431-m03:/home/docker/cp-test.txt ha-958431-m02:/home/docker/cp-test_ha-958431-m03_ha-958431-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-958431 ssh -n ha-958431-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-958431 ssh -n ha-958431-m02 "sudo cat /home/docker/cp-test_ha-958431-m03_ha-958431-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-958431 cp ha-958431-m03:/home/docker/cp-test.txt ha-958431-m04:/home/docker/cp-test_ha-958431-m03_ha-958431-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-958431 ssh -n ha-958431-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-958431 ssh -n ha-958431-m04 "sudo cat /home/docker/cp-test_ha-958431-m03_ha-958431-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-958431 cp testdata/cp-test.txt ha-958431-m04:/home/docker/cp-test.txt
E1124 13:35:51.722736  351593 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/addons-715644/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-958431 ssh -n ha-958431-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-958431 cp ha-958431-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1816707863/001/cp-test_ha-958431-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-958431 ssh -n ha-958431-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-958431 cp ha-958431-m04:/home/docker/cp-test.txt ha-958431:/home/docker/cp-test_ha-958431-m04_ha-958431.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-958431 ssh -n ha-958431-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-958431 ssh -n ha-958431 "sudo cat /home/docker/cp-test_ha-958431-m04_ha-958431.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-958431 cp ha-958431-m04:/home/docker/cp-test.txt ha-958431-m02:/home/docker/cp-test_ha-958431-m04_ha-958431-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-958431 ssh -n ha-958431-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-958431 ssh -n ha-958431-m02 "sudo cat /home/docker/cp-test_ha-958431-m04_ha-958431-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-958431 cp ha-958431-m04:/home/docker/cp-test.txt ha-958431-m03:/home/docker/cp-test_ha-958431-m04_ha-958431-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-958431 ssh -n ha-958431-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-958431 ssh -n ha-958431-m03 "sudo cat /home/docker/cp-test_ha-958431-m04_ha-958431-m03.txt"
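Every copy direction above follows the same two-step pattern: "minikube cp" pushes or pulls the file (host-to-node, node-to-host, or node-to-node), and "minikube ssh -n <node>" with "sudo cat" confirms the contents arrived. The generic form, with the node name and target path as placeholders, is roughly:

	# illustrative pattern; <node> and the target path are placeholders
	out/minikube-linux-amd64 -p ha-958431 cp testdata/cp-test.txt <node>:/home/docker/cp-test.txt
	out/minikube-linux-amd64 -p ha-958431 ssh -n <node> "sudo cat /home/docker/cp-test.txt"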
--- PASS: TestMultiControlPlane/serial/CopyFile (16.94s)

TestMultiControlPlane/serial/StopSecondaryNode (19.01s)
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-958431 node stop m02 --alsologtostderr -v 5
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-958431 node stop m02 --alsologtostderr -v 5: (18.332008352s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-958431 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-958431 status --alsologtostderr -v 5: exit status 7 (678.680379ms)

-- stdout --
	ha-958431
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-958431-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-958431-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-958431-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I1124 13:36:14.089307  416441 out.go:360] Setting OutFile to fd 1 ...
	I1124 13:36:14.089576  416441 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:36:14.089585  416441 out.go:374] Setting ErrFile to fd 2...
	I1124 13:36:14.089588  416441 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:36:14.089840  416441 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-348000/.minikube/bin
	I1124 13:36:14.090019  416441 out.go:368] Setting JSON to false
	I1124 13:36:14.090044  416441 mustload.go:66] Loading cluster: ha-958431
	I1124 13:36:14.090185  416441 notify.go:221] Checking for updates...
	I1124 13:36:14.090417  416441 config.go:182] Loaded profile config "ha-958431": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 13:36:14.090436  416441 status.go:174] checking status of ha-958431 ...
	I1124 13:36:14.091488  416441 cli_runner.go:164] Run: docker container inspect ha-958431 --format={{.State.Status}}
	I1124 13:36:14.109920  416441 status.go:371] ha-958431 host status = "Running" (err=<nil>)
	I1124 13:36:14.109946  416441 host.go:66] Checking if "ha-958431" exists ...
	I1124 13:36:14.110262  416441 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-958431
	I1124 13:36:14.128370  416441 host.go:66] Checking if "ha-958431" exists ...
	I1124 13:36:14.128617  416441 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 13:36:14.128657  416441 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-958431
	I1124 13:36:14.145418  416441 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33158 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/ha-958431/id_rsa Username:docker}
	I1124 13:36:14.243729  416441 ssh_runner.go:195] Run: systemctl --version
	I1124 13:36:14.249944  416441 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 13:36:14.261831  416441 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 13:36:14.317189  416441 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:70 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-24 13:36:14.306794413 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[
map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 13:36:14.317971  416441 kubeconfig.go:125] found "ha-958431" server: "https://192.168.49.254:8443"
	I1124 13:36:14.318010  416441 api_server.go:166] Checking apiserver status ...
	I1124 13:36:14.318055  416441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 13:36:14.329585  416441 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1239/cgroup
	W1124 13:36:14.337478  416441 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1239/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1124 13:36:14.337539  416441 ssh_runner.go:195] Run: ls
	I1124 13:36:14.341274  416441 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1124 13:36:14.345340  416441 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1124 13:36:14.345371  416441 status.go:463] ha-958431 apiserver status = Running (err=<nil>)
	I1124 13:36:14.345394  416441 status.go:176] ha-958431 status: &{Name:ha-958431 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1124 13:36:14.345413  416441 status.go:174] checking status of ha-958431-m02 ...
	I1124 13:36:14.345723  416441 cli_runner.go:164] Run: docker container inspect ha-958431-m02 --format={{.State.Status}}
	I1124 13:36:14.361872  416441 status.go:371] ha-958431-m02 host status = "Stopped" (err=<nil>)
	I1124 13:36:14.361898  416441 status.go:384] host is not running, skipping remaining checks
	I1124 13:36:14.361907  416441 status.go:176] ha-958431-m02 status: &{Name:ha-958431-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1124 13:36:14.361927  416441 status.go:174] checking status of ha-958431-m03 ...
	I1124 13:36:14.362166  416441 cli_runner.go:164] Run: docker container inspect ha-958431-m03 --format={{.State.Status}}
	I1124 13:36:14.378263  416441 status.go:371] ha-958431-m03 host status = "Running" (err=<nil>)
	I1124 13:36:14.378282  416441 host.go:66] Checking if "ha-958431-m03" exists ...
	I1124 13:36:14.378510  416441 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-958431-m03
	I1124 13:36:14.394428  416441 host.go:66] Checking if "ha-958431-m03" exists ...
	I1124 13:36:14.394693  416441 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 13:36:14.394731  416441 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-958431-m03
	I1124 13:36:14.410685  416441 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33168 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/ha-958431-m03/id_rsa Username:docker}
	I1124 13:36:14.507587  416441 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 13:36:14.520131  416441 kubeconfig.go:125] found "ha-958431" server: "https://192.168.49.254:8443"
	I1124 13:36:14.520155  416441 api_server.go:166] Checking apiserver status ...
	I1124 13:36:14.520183  416441 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 13:36:14.530388  416441 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1186/cgroup
	W1124 13:36:14.538002  416441 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1186/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1124 13:36:14.538038  416441 ssh_runner.go:195] Run: ls
	I1124 13:36:14.541386  416441 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1124 13:36:14.545347  416441 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1124 13:36:14.545370  416441 status.go:463] ha-958431-m03 apiserver status = Running (err=<nil>)
	I1124 13:36:14.545382  416441 status.go:176] ha-958431-m03 status: &{Name:ha-958431-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1124 13:36:14.545403  416441 status.go:174] checking status of ha-958431-m04 ...
	I1124 13:36:14.545668  416441 cli_runner.go:164] Run: docker container inspect ha-958431-m04 --format={{.State.Status}}
	I1124 13:36:14.562446  416441 status.go:371] ha-958431-m04 host status = "Running" (err=<nil>)
	I1124 13:36:14.562462  416441 host.go:66] Checking if "ha-958431-m04" exists ...
	I1124 13:36:14.562675  416441 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-958431-m04
	I1124 13:36:14.580397  416441 host.go:66] Checking if "ha-958431-m04" exists ...
	I1124 13:36:14.580632  416441 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 13:36:14.580665  416441 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-958431-m04
	I1124 13:36:14.596695  416441 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33173 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/ha-958431-m04/id_rsa Username:docker}
	I1124 13:36:14.694558  416441 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 13:36:14.706496  416441 status.go:176] ha-958431-m04 status: &{Name:ha-958431-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
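The W1124 "unable to find freezer cgroup" lines in the status output above are informational rather than a failure: the apiserver status probe greps /proc/<pid>/cgroup for a cgroup-v1-style "freezer" controller entry, and on a host running the unified cgroup v2 hierarchy (the default on recent Ubuntu 22.04 kernels, consistent with this agent) that file contains a single "0::" entry with no controller names, so the grep exits 1 and minikube falls back to the /healthz probe, which returns 200 for both running control planes here. A minimal illustration, with the pid and paths as hypothetical placeholders:

	# hypothetical contents of /proc/<apiserver-pid>/cgroup
	#   cgroup v1:  7:freezer:/kubepods/...   <- the probe's egrep would match this
	#   cgroup v2:  0::/...                   <- single unified entry, nothing to match
	sudo egrep '^[0-9]+:freezer:' /proc/<apiserver-pid>/cgroup   # exits 1 on a cgroup v2 host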
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (19.01s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.7s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.70s)

TestMultiControlPlane/serial/RestartSecondaryNode (13.97s)
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-958431 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-958431 node start m02 --alsologtostderr -v 5: (13.063084419s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-958431 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (13.97s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.87s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.87s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (108.58s)
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-958431 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-958431 stop --alsologtostderr -v 5
E1124 13:36:39.794199  351593 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/functional-334592/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:36:39.800553  351593 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/functional-334592/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:36:39.811899  351593 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/functional-334592/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:36:39.833241  351593 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/functional-334592/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:36:39.874585  351593 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/functional-334592/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:36:39.956061  351593 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/functional-334592/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:36:40.118287  351593 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/functional-334592/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:36:40.440013  351593 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/functional-334592/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:36:41.082037  351593 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/functional-334592/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:36:42.363726  351593 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/functional-334592/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:36:44.926653  351593 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/functional-334592/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:36:50.048146  351593 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/functional-334592/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:37:00.289702  351593 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/functional-334592/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:37:14.791847  351593 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/addons-715644/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-958431 stop --alsologtostderr -v 5: (49.697980191s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-958431 start --wait true --alsologtostderr -v 5
E1124 13:37:20.773763  351593 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/functional-334592/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:38:01.735316  351593 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/functional-334592/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-958431 start --wait true --alsologtostderr -v 5: (58.754336579s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-958431 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (108.58s)

TestMultiControlPlane/serial/DeleteSecondaryNode (10.49s)
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-958431 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-958431 node delete m03 --alsologtostderr -v 5: (9.670960668s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-958431 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.49s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.68s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.68s)

TestMultiControlPlane/serial/StopCluster (42.09s)
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-958431 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-958431 stop --alsologtostderr -v 5: (41.983004989s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-958431 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-958431 status --alsologtostderr -v 5: exit status 7 (111.277955ms)

-- stdout --
	ha-958431
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-958431-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-958431-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1124 13:39:12.045267  430770 out.go:360] Setting OutFile to fd 1 ...
	I1124 13:39:12.045508  430770 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:39:12.045517  430770 out.go:374] Setting ErrFile to fd 2...
	I1124 13:39:12.045521  430770 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:39:12.045709  430770 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-348000/.minikube/bin
	I1124 13:39:12.045851  430770 out.go:368] Setting JSON to false
	I1124 13:39:12.045878  430770 mustload.go:66] Loading cluster: ha-958431
	I1124 13:39:12.046001  430770 notify.go:221] Checking for updates...
	I1124 13:39:12.046235  430770 config.go:182] Loaded profile config "ha-958431": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 13:39:12.046254  430770 status.go:174] checking status of ha-958431 ...
	I1124 13:39:12.046694  430770 cli_runner.go:164] Run: docker container inspect ha-958431 --format={{.State.Status}}
	I1124 13:39:12.064176  430770 status.go:371] ha-958431 host status = "Stopped" (err=<nil>)
	I1124 13:39:12.064191  430770 status.go:384] host is not running, skipping remaining checks
	I1124 13:39:12.064197  430770 status.go:176] ha-958431 status: &{Name:ha-958431 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1124 13:39:12.064231  430770 status.go:174] checking status of ha-958431-m02 ...
	I1124 13:39:12.064515  430770 cli_runner.go:164] Run: docker container inspect ha-958431-m02 --format={{.State.Status}}
	I1124 13:39:12.081536  430770 status.go:371] ha-958431-m02 host status = "Stopped" (err=<nil>)
	I1124 13:39:12.081570  430770 status.go:384] host is not running, skipping remaining checks
	I1124 13:39:12.081581  430770 status.go:176] ha-958431-m02 status: &{Name:ha-958431-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1124 13:39:12.081607  430770 status.go:174] checking status of ha-958431-m04 ...
	I1124 13:39:12.081848  430770 cli_runner.go:164] Run: docker container inspect ha-958431-m04 --format={{.State.Status}}
	I1124 13:39:12.097289  430770 status.go:371] ha-958431-m04 host status = "Stopped" (err=<nil>)
	I1124 13:39:12.097305  430770 status.go:384] host is not running, skipping remaining checks
	I1124 13:39:12.097311  430770 status.go:176] ha-958431-m04 status: &{Name:ha-958431-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (42.09s)
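Note: a stopped cluster makes the status command above exit non-zero (exit status 7 in this run). Below is a minimal Go sketch, not the test harness itself, of running that command and reading the exit code; the binary path and profile name are copied from the log above.

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// Run the same status command the test exercises above.
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "ha-958431", "status", "--alsologtostderr", "-v", "5")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))

	// A fully stopped cluster makes the status command exit non-zero (7 in the run above).
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		fmt.Printf("status exited with code %d\n", exitErr.ExitCode())
	} else if err != nil {
		fmt.Println("failed to run status:", err)
	}
}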

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (50.92s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-958431 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1124 13:39:23.658090  351593 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/functional-334592/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-958431 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (50.134692943s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-958431 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (50.92s)
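Note: the readiness check above passes a go-template that prints each node's Ready condition. Below is a minimal Go sketch of the same check; it assumes kubectl is on PATH and the current context points at the restarted cluster, and it drops the outer shell quoting seen in the log.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same template as the test above: print the status of every node's Ready condition.
	tmpl := `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`
	out, err := exec.Command("kubectl", "get", "nodes", "-o", "go-template="+tmpl).Output()
	if err != nil {
		fmt.Println("kubectl failed:", err)
		return
	}
	// After a restart every node should report Ready=True.
	for _, status := range strings.Fields(string(out)) {
		if status != "True" {
			fmt.Println("node not ready:", status)
		}
	}
}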

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.69s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.69s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (45.11s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-958431 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-958431 node add --control-plane --alsologtostderr -v 5: (44.251957428s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-958431 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (45.11s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.87s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.87s)

                                                
                                    
TestJSONOutput/start/Command (39.63s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-833174 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-833174 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (39.6259085s)
--- PASS: TestJSONOutput/start/Command (39.63s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (6.14s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-833174 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-833174 --output=json --user=testUser: (6.140190123s)
--- PASS: TestJSONOutput/stop/Command (6.14s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.21s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-453160 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-453160 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (73.824796ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"bb80211c-7bf8-4ae9-9a4c-5f95a36ee82c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-453160] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"8c2e6960-306c-449f-b763-eb4dafa5101a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21932"}}
	{"specversion":"1.0","id":"ecd536c3-2158-4958-89d9-e4ac00469814","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"d7d0dc64-1fed-4dcf-951c-d8639f2b8161","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21932-348000/kubeconfig"}}
	{"specversion":"1.0","id":"1437b453-f001-4244-92df-8574db8ea32e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21932-348000/.minikube"}}
	{"specversion":"1.0","id":"9b3e9923-32b7-4cb1-a5dc-1d6fbc5c392c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"3b1e27a2-3adb-4114-ab9d-7b8b840be187","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"c965dfa6-7278-4e03-82f3-106997346c4c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-453160" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-453160
--- PASS: TestErrorJSONOutput (0.21s)
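Note: each line of the JSON output above is a CloudEvents-style envelope. Below is a minimal Go sketch that decodes one such line (the sample is trimmed to a few keys for brevity; the struct fields mirror keys that actually appear in the output above).

package main

import (
	"encoding/json"
	"fmt"
)

// event mirrors keys visible in the minikube --output=json lines above.
type event struct {
	SpecVersion string            `json:"specversion"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	// Trimmed copy of the error event emitted above for the unsupported driver.
	line := `{"specversion":"1.0","type":"io.k8s.sigs.minikube.error","data":{"exitcode":"56","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS"}}`
	var e event
	if err := json.Unmarshal([]byte(line), &e); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	// Error events carry the exit code and message in the data payload.
	fmt.Println(e.Type, e.Data["exitcode"], e.Data["message"])
}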

                                                
                                    
TestKicCustomNetwork/create_custom_network (25.56s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-535787 --network=
E1124 13:42:07.499524  351593 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/functional-334592/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-535787 --network=: (23.44823604s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-535787" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-535787
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-535787: (2.093271776s)
--- PASS: TestKicCustomNetwork/create_custom_network (25.56s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (25.96s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-213851 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-213851 --network=bridge: (23.977672849s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-213851" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-213851
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-213851: (1.964292031s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (25.96s)

                                                
                                    
TestKicExistingNetwork (22.16s)

                                                
                                                
=== RUN   TestKicExistingNetwork
I1124 13:42:44.047296  351593 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1124 13:42:44.062571  351593 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1124 13:42:44.062649  351593 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1124 13:42:44.062678  351593 cli_runner.go:164] Run: docker network inspect existing-network
W1124 13:42:44.077594  351593 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1124 13:42:44.077620  351593 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I1124 13:42:44.077634  351593 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I1124 13:42:44.077780  351593 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1124 13:42:44.093524  351593 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-d51e7dfe1049 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:4a:86:1b:17:16:ff} reservation:<nil>}
I1124 13:42:44.093929  351593 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001c09620}
I1124 13:42:44.093961  351593 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1124 13:42:44.094019  351593 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1124 13:42:44.139348  351593 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-373525 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-373525 --network=existing-network: (20.097879269s)
helpers_test.go:175: Cleaning up "existing-network-373525" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-373525
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-373525: (1.935412977s)
I1124 13:43:06.190337  351593 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (22.16s)
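Note: the test above creates the Docker network out of band and then points minikube at it. Below is a minimal Go sketch of that two-step setup; the subnet, gateway, network name and profile name are the values from the log above, and the docker flags are a trimmed subset of the ones shown.

package main

import (
	"fmt"
	"os/exec"
)

func run(name string, args ...string) error {
	cmd := exec.Command(name, args...)
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	return err
}

func main() {
	// Create the bridge network up front, as the test does before calling start.
	if err := run("docker", "network", "create", "--driver=bridge",
		"--subnet=192.168.58.0/24", "--gateway=192.168.58.1",
		"--label=created_by.minikube.sigs.k8s.io=true", "existing-network"); err != nil {
		fmt.Println("network create failed:", err)
		return
	}
	// Point minikube at the pre-existing network instead of letting it create one.
	if err := run("out/minikube-linux-amd64", "start", "-p", "existing-network-373525",
		"--network=existing-network"); err != nil {
		fmt.Println("start failed:", err)
	}
}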

                                                
                                    
TestKicCustomSubnet (26.35s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-701055 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-701055 --subnet=192.168.60.0/24: (24.24819311s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-701055 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-701055" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-701055
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-701055: (2.082739279s)
--- PASS: TestKicCustomSubnet (26.35s)

                                                
                                    
TestKicStaticIP (27.09s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-106850 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-106850 --static-ip=192.168.200.200: (24.866936939s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-106850 ip
helpers_test.go:175: Cleaning up "static-ip-106850" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-106850
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-106850: (2.082738552s)
--- PASS: TestKicStaticIP (27.09s)

                                                
                                    
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (48.67s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-528197 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-528197 --driver=docker  --container-runtime=crio: (22.036334291s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-530160 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-530160 --driver=docker  --container-runtime=crio: (20.863306945s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-528197
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-530160
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-530160" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-530160
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-530160: (2.286917945s)
helpers_test.go:175: Cleaning up "first-528197" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-528197
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-528197: (2.279477217s)
--- PASS: TestMinikubeProfile (48.67s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (4.71s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-046577 --memory=3072 --mount-string /tmp/TestMountStartserial2102438603/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-046577 --memory=3072 --mount-string /tmp/TestMountStartserial2102438603/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (3.705815017s)
--- PASS: TestMountStart/serial/StartWithMountFirst (4.71s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-046577 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)
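Note: the mount verification above is just a directory listing over SSH. Below is a minimal Go sketch of the same check, using the profile name and mount path from the log above; it assumes the cluster was started with the --mount-string shown.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// List the host mount inside the node, as the VerifyMount* steps do above.
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "mount-start-1-046577",
		"ssh", "--", "ls", "/minikube-host").Output()
	if err != nil {
		fmt.Println("ssh failed:", err)
		return
	}
	// A readable listing (even an empty one) means the host directory is mounted.
	fmt.Printf("mounted entries: %q\n", strings.Fields(string(out)))
}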

                                                
                                    
TestMountStart/serial/StartWithMountSecond (7.6s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-066595 --memory=3072 --mount-string /tmp/TestMountStartserial2102438603/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-066595 --memory=3072 --mount-string /tmp/TestMountStartserial2102438603/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (6.603037153s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.60s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-066595 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.65s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-046577 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-046577 --alsologtostderr -v=5: (1.651135662s)
--- PASS: TestMountStart/serial/DeleteFirst (1.65s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-066595 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.26s)

                                                
                                    
TestMountStart/serial/Stop (1.25s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-066595
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-066595: (1.249913823s)
--- PASS: TestMountStart/serial/Stop (1.25s)

                                                
                                    
TestMountStart/serial/RestartStopped (7.23s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-066595
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-066595: (6.227169658s)
--- PASS: TestMountStart/serial/RestartStopped (7.23s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-066595 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (89.97s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-979706 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1124 13:45:51.724195  351593 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/addons-715644/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 13:46:39.788974  351593 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/functional-334592/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-979706 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (1m29.502654083s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-979706 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (89.97s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (3.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-979706 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-979706 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-979706 -- rollout status deployment/busybox: (1.728597009s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-979706 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-979706 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-979706 -- exec busybox-7b57f96db7-4snhr -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-979706 -- exec busybox-7b57f96db7-z7b2q -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-979706 -- exec busybox-7b57f96db7-4snhr -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-979706 -- exec busybox-7b57f96db7-z7b2q -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-979706 -- exec busybox-7b57f96db7-4snhr -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-979706 -- exec busybox-7b57f96db7-z7b2q -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (3.09s)
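Note: the deployment check above resolves cluster DNS from every busybox replica. Below is a minimal Go sketch of that loop; it lists the pod names with jsonpath first, as the test does, and assumes kubectl's current context is this multinode cluster.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Collect the busybox pod names, as the test does with jsonpath.
	out, err := exec.Command("kubectl", "get", "pods",
		"-o", "jsonpath={.items[*].metadata.name}").Output()
	if err != nil {
		fmt.Println("kubectl failed:", err)
		return
	}
	// Resolve the in-cluster service name from each pod in turn.
	for _, pod := range strings.Fields(string(out)) {
		res, err := exec.Command("kubectl", "exec", pod, "--",
			"nslookup", "kubernetes.default.svc.cluster.local").CombinedOutput()
		fmt.Printf("%s: err=%v\n%s", pod, err, res)
	}
}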

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.69s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-979706 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-979706 -- exec busybox-7b57f96db7-4snhr -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-979706 -- exec busybox-7b57f96db7-4snhr -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-979706 -- exec busybox-7b57f96db7-z7b2q -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-979706 -- exec busybox-7b57f96db7-z7b2q -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.69s)

                                                
                                    
TestMultiNode/serial/AddNode (23.43s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-979706 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-979706 -v=5 --alsologtostderr: (22.777017707s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-979706 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (23.43s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-979706 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.64s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.64s)

                                                
                                    
TestMultiNode/serial/CopyFile (9.61s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-979706 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-979706 cp testdata/cp-test.txt multinode-979706:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-979706 ssh -n multinode-979706 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-979706 cp multinode-979706:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2836314928/001/cp-test_multinode-979706.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-979706 ssh -n multinode-979706 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-979706 cp multinode-979706:/home/docker/cp-test.txt multinode-979706-m02:/home/docker/cp-test_multinode-979706_multinode-979706-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-979706 ssh -n multinode-979706 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-979706 ssh -n multinode-979706-m02 "sudo cat /home/docker/cp-test_multinode-979706_multinode-979706-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-979706 cp multinode-979706:/home/docker/cp-test.txt multinode-979706-m03:/home/docker/cp-test_multinode-979706_multinode-979706-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-979706 ssh -n multinode-979706 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-979706 ssh -n multinode-979706-m03 "sudo cat /home/docker/cp-test_multinode-979706_multinode-979706-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-979706 cp testdata/cp-test.txt multinode-979706-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-979706 ssh -n multinode-979706-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-979706 cp multinode-979706-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2836314928/001/cp-test_multinode-979706-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-979706 ssh -n multinode-979706-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-979706 cp multinode-979706-m02:/home/docker/cp-test.txt multinode-979706:/home/docker/cp-test_multinode-979706-m02_multinode-979706.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-979706 ssh -n multinode-979706-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-979706 ssh -n multinode-979706 "sudo cat /home/docker/cp-test_multinode-979706-m02_multinode-979706.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-979706 cp multinode-979706-m02:/home/docker/cp-test.txt multinode-979706-m03:/home/docker/cp-test_multinode-979706-m02_multinode-979706-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-979706 ssh -n multinode-979706-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-979706 ssh -n multinode-979706-m03 "sudo cat /home/docker/cp-test_multinode-979706-m02_multinode-979706-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-979706 cp testdata/cp-test.txt multinode-979706-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-979706 ssh -n multinode-979706-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-979706 cp multinode-979706-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile2836314928/001/cp-test_multinode-979706-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-979706 ssh -n multinode-979706-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-979706 cp multinode-979706-m03:/home/docker/cp-test.txt multinode-979706:/home/docker/cp-test_multinode-979706-m03_multinode-979706.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-979706 ssh -n multinode-979706-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-979706 ssh -n multinode-979706 "sudo cat /home/docker/cp-test_multinode-979706-m03_multinode-979706.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-979706 cp multinode-979706-m03:/home/docker/cp-test.txt multinode-979706-m02:/home/docker/cp-test_multinode-979706-m03_multinode-979706-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-979706 ssh -n multinode-979706-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-979706 ssh -n multinode-979706-m02 "sudo cat /home/docker/cp-test_multinode-979706-m03_multinode-979706-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.61s)
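Note: each cp step above is paired with an ssh readback of the copied file. Below is a minimal Go sketch of one copy-and-verify round trip, with the profile and paths from the log above; the byte comparison stands in for what the test helpers check.

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

func main() {
	profile := "multinode-979706"
	local := "testdata/cp-test.txt"
	remote := "/home/docker/cp-test.txt"

	// Copy the file into the primary node, as the cp steps above do.
	if out, err := exec.Command("out/minikube-linux-amd64", "-p", profile,
		"cp", local, profile+":"+remote).CombinedOutput(); err != nil {
		fmt.Printf("cp failed: %v\n%s", err, out)
		return
	}
	// Read it back over SSH and compare with the local contents.
	got, err := exec.Command("out/minikube-linux-amd64", "-p", profile,
		"ssh", "-n", profile, "sudo cat "+remote).Output()
	if err != nil {
		fmt.Println("ssh failed:", err)
		return
	}
	want, _ := os.ReadFile(local)
	fmt.Println("contents match:", bytes.Equal(bytes.TrimSpace(got), bytes.TrimSpace(want)))
}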

                                                
                                    
TestMultiNode/serial/StopNode (2.22s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-979706 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-979706 node stop m03: (1.257088076s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-979706 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-979706 status: exit status 7 (478.521006ms)

                                                
                                                
-- stdout --
	multinode-979706
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-979706-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-979706-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-979706 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-979706 status --alsologtostderr: exit status 7 (487.536485ms)

                                                
                                                
-- stdout --
	multinode-979706
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-979706-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-979706-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1124 13:47:23.000202  490442 out.go:360] Setting OutFile to fd 1 ...
	I1124 13:47:23.000464  490442 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:47:23.000478  490442 out.go:374] Setting ErrFile to fd 2...
	I1124 13:47:23.000485  490442 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:47:23.000730  490442 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-348000/.minikube/bin
	I1124 13:47:23.000927  490442 out.go:368] Setting JSON to false
	I1124 13:47:23.000958  490442 mustload.go:66] Loading cluster: multinode-979706
	I1124 13:47:23.001029  490442 notify.go:221] Checking for updates...
	I1124 13:47:23.001322  490442 config.go:182] Loaded profile config "multinode-979706": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 13:47:23.001343  490442 status.go:174] checking status of multinode-979706 ...
	I1124 13:47:23.001848  490442 cli_runner.go:164] Run: docker container inspect multinode-979706 --format={{.State.Status}}
	I1124 13:47:23.020769  490442 status.go:371] multinode-979706 host status = "Running" (err=<nil>)
	I1124 13:47:23.020794  490442 host.go:66] Checking if "multinode-979706" exists ...
	I1124 13:47:23.021142  490442 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-979706
	I1124 13:47:23.038432  490442 host.go:66] Checking if "multinode-979706" exists ...
	I1124 13:47:23.038637  490442 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 13:47:23.038681  490442 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-979706
	I1124 13:47:23.054761  490442 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33278 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/multinode-979706/id_rsa Username:docker}
	I1124 13:47:23.152856  490442 ssh_runner.go:195] Run: systemctl --version
	I1124 13:47:23.158830  490442 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 13:47:23.170290  490442 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 13:47:23.225137  490442 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:false NGoroutines:65 SystemTime:2025-11-24 13:47:23.2159368 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86
_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[ma
p[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 13:47:23.225659  490442 kubeconfig.go:125] found "multinode-979706" server: "https://192.168.67.2:8443"
	I1124 13:47:23.225688  490442 api_server.go:166] Checking apiserver status ...
	I1124 13:47:23.225728  490442 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1124 13:47:23.237137  490442 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1224/cgroup
	W1124 13:47:23.244947  490442 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1224/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1124 13:47:23.244983  490442 ssh_runner.go:195] Run: ls
	I1124 13:47:23.248428  490442 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1124 13:47:23.253222  490442 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1124 13:47:23.253240  490442 status.go:463] multinode-979706 apiserver status = Running (err=<nil>)
	I1124 13:47:23.253249  490442 status.go:176] multinode-979706 status: &{Name:multinode-979706 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1124 13:47:23.253264  490442 status.go:174] checking status of multinode-979706-m02 ...
	I1124 13:47:23.253482  490442 cli_runner.go:164] Run: docker container inspect multinode-979706-m02 --format={{.State.Status}}
	I1124 13:47:23.269944  490442 status.go:371] multinode-979706-m02 host status = "Running" (err=<nil>)
	I1124 13:47:23.269963  490442 host.go:66] Checking if "multinode-979706-m02" exists ...
	I1124 13:47:23.270189  490442 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-979706-m02
	I1124 13:47:23.284553  490442 host.go:66] Checking if "multinode-979706-m02" exists ...
	I1124 13:47:23.284809  490442 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1124 13:47:23.284846  490442 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-979706-m02
	I1124 13:47:23.302362  490442 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33283 SSHKeyPath:/home/jenkins/minikube-integration/21932-348000/.minikube/machines/multinode-979706-m02/id_rsa Username:docker}
	I1124 13:47:23.398365  490442 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1124 13:47:23.409637  490442 status.go:176] multinode-979706-m02 status: &{Name:multinode-979706-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1124 13:47:23.409664  490442 status.go:174] checking status of multinode-979706-m03 ...
	I1124 13:47:23.409949  490442 cli_runner.go:164] Run: docker container inspect multinode-979706-m03 --format={{.State.Status}}
	I1124 13:47:23.426420  490442 status.go:371] multinode-979706-m03 host status = "Stopped" (err=<nil>)
	I1124 13:47:23.426436  490442 status.go:384] host is not running, skipping remaining checks
	I1124 13:47:23.426443  490442 status.go:176] multinode-979706-m03 status: &{Name:multinode-979706-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.22s)

                                                
                                    
TestMultiNode/serial/StartAfterStop (7.13s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-979706 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-979706 node start m03 -v=5 --alsologtostderr: (6.437886825s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-979706 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (7.13s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (82.88s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-979706
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-979706
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-979706: (31.780333672s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-979706 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-979706 --wait=true -v=5 --alsologtostderr: (50.970835899s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-979706
--- PASS: TestMultiNode/serial/RestartKeepsNodes (82.88s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.21s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-979706 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-979706 node delete m03: (4.586284347s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-979706 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.21s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (28.49s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-979706 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-979706 stop: (28.300170362s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-979706 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-979706 status: exit status 7 (93.98158ms)

                                                
                                                
-- stdout --
	multinode-979706
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-979706-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-979706 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-979706 status --alsologtostderr: exit status 7 (95.451064ms)

                                                
                                                
-- stdout --
	multinode-979706
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-979706-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1124 13:49:27.099123  500273 out.go:360] Setting OutFile to fd 1 ...
	I1124 13:49:27.099683  500273 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:49:27.099704  500273 out.go:374] Setting ErrFile to fd 2...
	I1124 13:49:27.099712  500273 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:49:27.100452  500273 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-348000/.minikube/bin
	I1124 13:49:27.100708  500273 out.go:368] Setting JSON to false
	I1124 13:49:27.100747  500273 mustload.go:66] Loading cluster: multinode-979706
	I1124 13:49:27.100793  500273 notify.go:221] Checking for updates...
	I1124 13:49:27.101272  500273 config.go:182] Loaded profile config "multinode-979706": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 13:49:27.101301  500273 status.go:174] checking status of multinode-979706 ...
	I1124 13:49:27.101825  500273 cli_runner.go:164] Run: docker container inspect multinode-979706 --format={{.State.Status}}
	I1124 13:49:27.120226  500273 status.go:371] multinode-979706 host status = "Stopped" (err=<nil>)
	I1124 13:49:27.120244  500273 status.go:384] host is not running, skipping remaining checks
	I1124 13:49:27.120251  500273 status.go:176] multinode-979706 status: &{Name:multinode-979706 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1124 13:49:27.120273  500273 status.go:174] checking status of multinode-979706-m02 ...
	I1124 13:49:27.120503  500273 cli_runner.go:164] Run: docker container inspect multinode-979706-m02 --format={{.State.Status}}
	I1124 13:49:27.137518  500273 status.go:371] multinode-979706-m02 host status = "Stopped" (err=<nil>)
	I1124 13:49:27.137535  500273 status.go:384] host is not running, skipping remaining checks
	I1124 13:49:27.137543  500273 status.go:176] multinode-979706-m02 status: &{Name:multinode-979706-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (28.49s)
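
Note: in this run the status command exits with code 7 while every host reports Stopped, and later checks in this report explicitly treat that exit code as "may be ok". A minimal sketch of tolerating it, assuming the binary path and profile name used above (illustrative only, not the test's actual helper):

// statuscheck.go: treat minikube's exit status 7 as "profile stopped", not a failure.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "multinode-979706", "status")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 7 {
		// In this log, exit status 7 accompanies "host: Stopped", which is the
		// expected state right after "minikube stop".
		fmt.Println("profile is stopped (exit status 7), continuing")
		return
	}
	if err != nil {
		panic(err)
	}
}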

                                                
                                    
x
+
TestMultiNode/serial/RestartMultiNode (24.39s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-979706 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-979706 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (23.803897816s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-979706 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (24.39s)

                                                
                                    
x
+
TestMultiNode/serial/ValidateNameConflict (22.09s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-979706
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-979706-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-979706-m02 --driver=docker  --container-runtime=crio: exit status 14 (73.847772ms)

                                                
                                                
-- stdout --
	* [multinode-979706-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21932
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21932-348000/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21932-348000/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-979706-m02' is duplicated with machine name 'multinode-979706-m02' in profile 'multinode-979706'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-979706-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-979706-m03 --driver=docker  --container-runtime=crio: (19.357134885s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-979706
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-979706: exit status 80 (284.548284ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-979706 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-979706-m03 already exists in multinode-979706-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-979706-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-979706-m03: (2.316529836s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (22.09s)
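
Note: the MK_USAGE failure above comes from a proposed profile name colliding with a machine name inside an existing multinode profile. A purely illustrative sketch of such a uniqueness check (the slice of existing names below is hypothetical, taken from this run's log):

// nameconflict.go: sketch of the profile-name uniqueness rule behind the MK_USAGE error.
package main

import "fmt"

func main() {
	existing := []string{"multinode-979706", "multinode-979706-m02"} // illustrative
	proposed := "multinode-979706-m02"

	for _, name := range existing {
		if name == proposed {
			fmt.Printf("profile name %q is duplicated with machine name %q; profile names must be unique\n",
				proposed, name)
			return
		}
	}
	fmt.Printf("profile name %q is available\n", proposed)
}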

                                                
                                    
x
+
TestPreload (101.82s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-341955 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0
E1124 13:50:51.723331  351593 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/addons-715644/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-341955 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0: (45.548424189s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-341955 image pull gcr.io/k8s-minikube/busybox
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-341955
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-341955: (5.865160707s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-341955 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E1124 13:51:39.790568  351593 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/functional-334592/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-341955 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (47.024418148s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-341955 image list
helpers_test.go:175: Cleaning up "test-preload-341955" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-341955
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-341955: (2.316252633s)
--- PASS: TestPreload (101.82s)

                                                
                                    
x
+
TestScheduledStopUnix (96.2s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-866823 --memory=3072 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-866823 --memory=3072 --driver=docker  --container-runtime=crio: (20.563072371s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-866823 --schedule 5m -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1124 13:52:20.137333  517122 out.go:360] Setting OutFile to fd 1 ...
	I1124 13:52:20.137444  517122 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:52:20.137453  517122 out.go:374] Setting ErrFile to fd 2...
	I1124 13:52:20.137456  517122 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:52:20.137653  517122 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-348000/.minikube/bin
	I1124 13:52:20.137876  517122 out.go:368] Setting JSON to false
	I1124 13:52:20.137988  517122 mustload.go:66] Loading cluster: scheduled-stop-866823
	I1124 13:52:20.138298  517122 config.go:182] Loaded profile config "scheduled-stop-866823": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 13:52:20.138366  517122 profile.go:143] Saving config to /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/scheduled-stop-866823/config.json ...
	I1124 13:52:20.138531  517122 mustload.go:66] Loading cluster: scheduled-stop-866823
	I1124 13:52:20.138636  517122 config.go:182] Loaded profile config "scheduled-stop-866823": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1

                                                
                                                
** /stderr **
scheduled_stop_test.go:204: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-866823 -n scheduled-stop-866823
scheduled_stop_test.go:172: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-866823 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1124 13:52:20.516456  517271 out.go:360] Setting OutFile to fd 1 ...
	I1124 13:52:20.516715  517271 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:52:20.516725  517271 out.go:374] Setting ErrFile to fd 2...
	I1124 13:52:20.516730  517271 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:52:20.516929  517271 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-348000/.minikube/bin
	I1124 13:52:20.517175  517271 out.go:368] Setting JSON to false
	I1124 13:52:20.517353  517271 daemonize_unix.go:73] killing process 517157 as it is an old scheduled stop
	I1124 13:52:20.517451  517271 mustload.go:66] Loading cluster: scheduled-stop-866823
	I1124 13:52:20.517844  517271 config.go:182] Loaded profile config "scheduled-stop-866823": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 13:52:20.517946  517271 profile.go:143] Saving config to /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/scheduled-stop-866823/config.json ...
	I1124 13:52:20.518165  517271 mustload.go:66] Loading cluster: scheduled-stop-866823
	I1124 13:52:20.518289  517271 config.go:182] Loaded profile config "scheduled-stop-866823": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
I1124 13:52:20.522805  351593 retry.go:31] will retry after 58.95µs: open /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/scheduled-stop-866823/pid: no such file or directory
I1124 13:52:20.522943  351593 retry.go:31] will retry after 183.444µs: open /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/scheduled-stop-866823/pid: no such file or directory
I1124 13:52:20.524104  351593 retry.go:31] will retry after 132.299µs: open /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/scheduled-stop-866823/pid: no such file or directory
I1124 13:52:20.525237  351593 retry.go:31] will retry after 265.359µs: open /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/scheduled-stop-866823/pid: no such file or directory
I1124 13:52:20.526366  351593 retry.go:31] will retry after 467.89µs: open /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/scheduled-stop-866823/pid: no such file or directory
I1124 13:52:20.527487  351593 retry.go:31] will retry after 413.678µs: open /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/scheduled-stop-866823/pid: no such file or directory
I1124 13:52:20.528607  351593 retry.go:31] will retry after 891.301µs: open /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/scheduled-stop-866823/pid: no such file or directory
I1124 13:52:20.529728  351593 retry.go:31] will retry after 1.420087ms: open /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/scheduled-stop-866823/pid: no such file or directory
I1124 13:52:20.531940  351593 retry.go:31] will retry after 3.660059ms: open /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/scheduled-stop-866823/pid: no such file or directory
I1124 13:52:20.536142  351593 retry.go:31] will retry after 5.558811ms: open /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/scheduled-stop-866823/pid: no such file or directory
I1124 13:52:20.542362  351593 retry.go:31] will retry after 6.985582ms: open /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/scheduled-stop-866823/pid: no such file or directory
I1124 13:52:20.549615  351593 retry.go:31] will retry after 10.386581ms: open /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/scheduled-stop-866823/pid: no such file or directory
I1124 13:52:20.560840  351593 retry.go:31] will retry after 18.378784ms: open /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/scheduled-stop-866823/pid: no such file or directory
I1124 13:52:20.580013  351593 retry.go:31] will retry after 10.632024ms: open /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/scheduled-stop-866823/pid: no such file or directory
I1124 13:52:20.591251  351593 retry.go:31] will retry after 39.158667ms: open /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/scheduled-stop-866823/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-866823 --cancel-scheduled
minikube stop output:

                                                
                                                
-- stdout --
	* All existing scheduled stops cancelled

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-866823 -n scheduled-stop-866823
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-866823
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-866823 --schedule 15s -v=5 --alsologtostderr
minikube stop output:

                                                
                                                
** stderr ** 
	I1124 13:52:46.367552  517833 out.go:360] Setting OutFile to fd 1 ...
	I1124 13:52:46.367653  517833 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:52:46.367660  517833 out.go:374] Setting ErrFile to fd 2...
	I1124 13:52:46.367664  517833 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:52:46.367857  517833 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-348000/.minikube/bin
	I1124 13:52:46.368107  517833 out.go:368] Setting JSON to false
	I1124 13:52:46.368183  517833 mustload.go:66] Loading cluster: scheduled-stop-866823
	I1124 13:52:46.368496  517833 config.go:182] Loaded profile config "scheduled-stop-866823": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 13:52:46.368562  517833 profile.go:143] Saving config to /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/scheduled-stop-866823/config.json ...
	I1124 13:52:46.368754  517833 mustload.go:66] Loading cluster: scheduled-stop-866823
	I1124 13:52:46.368851  517833 config.go:182] Loaded profile config "scheduled-stop-866823": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1

                                                
                                                
** /stderr **
scheduled_stop_test.go:172: signal error was:  os: process already finished
E1124 13:53:02.861030  351593 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/functional-334592/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-866823
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-866823: exit status 7 (80.410964ms)

                                                
                                                
-- stdout --
	scheduled-stop-866823
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-866823 -n scheduled-stop-866823
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-866823 -n scheduled-stop-866823: exit status 7 (76.302721ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-866823" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-866823
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-866823: (4.187707623s)
--- PASS: TestScheduledStopUnix (96.20s)
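
Note: the retry.go lines above show the pid file being re-read with steadily growing delays until it appears. A minimal sketch of that retry-with-backoff pattern; the path and backoff schedule below are illustrative, not the values minikube uses:

// retrypid.go: retry reading a pid file with a roughly doubling backoff.
package main

import (
	"fmt"
	"os"
	"time"
)

func main() {
	path := "/tmp/scheduled-stop-demo/pid" // hypothetical path for illustration
	delay := 50 * time.Microsecond
	for attempt := 1; attempt <= 15; attempt++ {
		data, err := os.ReadFile(path)
		if err == nil {
			fmt.Printf("pid file found after %d attempt(s): %s\n", attempt, data)
			return
		}
		fmt.Printf("attempt %d: %v; will retry after %v\n", attempt, err, delay)
		time.Sleep(delay)
		delay *= 2 // grows each round, like the logged retry intervals
	}
	fmt.Println("giving up: pid file never appeared")
}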

                                                
                                    
x
+
TestInsufficientStorage (12.11s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-676419 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-676419 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (9.683109174s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"27586bd1-542c-490f-b719-5624b93e662b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-676419] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"e9f6e3bd-78bb-4d1c-a32b-bb0a31389550","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21932"}}
	{"specversion":"1.0","id":"51c6e9fc-5f24-446b-a9a8-e3e2224d9454","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"cd6f24d6-4353-42d7-8c87-420f743fb078","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21932-348000/kubeconfig"}}
	{"specversion":"1.0","id":"e2dfa24b-dd86-4522-894d-d66d058bfc41","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21932-348000/.minikube"}}
	{"specversion":"1.0","id":"4f5ed9e2-ccbe-4d53-a0a4-e36c37ace231","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"c5eab855-396e-4f36-afd7-c93953e2c183","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"5bce8e51-85e1-4dc4-897e-1cb9c954afe8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"931828ac-f3e2-4af4-9e18-b9ccab325c31","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"7fbf8240-6443-4a6f-b5f7-908fe2ab3eb9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"431a5711-5ba9-4fc9-a062-1ed17ec6f6ec","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"40f6cb63-9120-45cf-abee-ab4ca096d971","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-676419\" primary control-plane node in \"insufficient-storage-676419\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"ed8b952c-aad1-45d9-99eb-1783f86db0dc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1763789673-21948 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"b8257cc1-ed47-4d61-be98-0a236f7d4f9b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"72308243-5ebc-426f-b3fa-f7e620c9b855","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-676419 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-676419 --output=json --layout=cluster: exit status 7 (281.649768ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-676419","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-676419","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1124 13:53:45.671728  520352 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-676419" does not appear in /home/jenkins/minikube-integration/21932-348000/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-676419 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-676419 --output=json --layout=cluster: exit status 7 (287.308794ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-676419","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-676419","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1124 13:53:45.959852  520463 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-676419" does not appear in /home/jenkins/minikube-integration/21932-348000/kubeconfig
	E1124 13:53:45.969867  520463 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/insufficient-storage-676419/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-676419" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-676419
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-676419: (1.85957364s)
--- PASS: TestInsufficientStorage (12.11s)
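
Note: with --output=json the start command emits one CloudEvents-style JSON object per line, and the failure above surfaces as a type io.k8s.sigs.minikube.error event carrying exitcode 26 and name RSRC_DOCKER_STORAGE. A minimal sketch of picking that event out of the stream, assuming the JSON lines are piped to stdin (e.g. by redirecting the start command's stdout into this program):

// scanevents.go: find the error event in minikube's --output=json stream.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // event lines can be long
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip anything that is not a JSON event line
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("error event: exitcode=%s name=%s\n", ev.Data["exitcode"], ev.Data["name"])
		}
	}
}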

                                                
                                    
x
+
TestRunningBinaryUpgrade (44.6s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.3756266791 start -p running-upgrade-019487 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.3756266791 start -p running-upgrade-019487 --memory=3072 --vm-driver=docker  --container-runtime=crio: (20.564807126s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-019487 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1124 13:56:39.789302  351593 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/functional-334592/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-019487 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (21.101012468s)
helpers_test.go:175: Cleaning up "running-upgrade-019487" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-019487
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-019487: (2.429998236s)
--- PASS: TestRunningBinaryUpgrade (44.60s)

                                                
                                    
x
+
TestKubernetesUpgrade (303.41s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-061040 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-061040 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (29.851864188s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-061040
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-061040: (2.068253012s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-061040 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-061040 status --format={{.Host}}: exit status 7 (87.798823ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-061040 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-061040 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m23.560980618s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-061040 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-061040 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-061040 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (80.180961ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-061040] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21932
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21932-348000/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21932-348000/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-061040
	    minikube start -p kubernetes-upgrade-061040 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-0610402 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-061040 --kubernetes-version=v1.34.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-061040 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-061040 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (5.171751917s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-061040" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-061040
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-061040: (2.533719113s)
--- PASS: TestKubernetesUpgrade (303.41s)
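
Note: the downgrade attempt above is rejected with exit status 106 (K8S_DOWNGRADE_UNSUPPORTED) because the requested v1.28.0 is older than the running v1.34.1. A minimal sketch of that kind of version comparison using golang.org/x/mod/semver; this is illustrative and not minikube's actual implementation:

// downgradecheck.go: refuse a requested version that is older than the running one.
package main

import (
	"fmt"

	"golang.org/x/mod/semver"
)

func main() {
	current := "v1.34.1"   // version the existing cluster runs
	requested := "v1.28.0" // version passed on the second start

	switch semver.Compare(requested, current) {
	case -1:
		fmt.Printf("refusing downgrade from %s to %s\n", current, requested)
	case 0:
		fmt.Printf("already at %s, nothing to do\n", current)
	default:
		fmt.Printf("upgrading from %s to %s\n", current, requested)
	}
}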

                                                
                                    
x
+
TestMissingContainerUpgrade (87.99s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.1007756059 start -p missing-upgrade-119944 --memory=3072 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.1007756059 start -p missing-upgrade-119944 --memory=3072 --driver=docker  --container-runtime=crio: (46.306226622s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-119944
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-119944: (1.679552062s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-119944
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-119944 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-119944 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (37.016538761s)
helpers_test.go:175: Cleaning up "missing-upgrade-119944" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-119944
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-119944: (2.387395104s)
--- PASS: TestMissingContainerUpgrade (87.99s)

                                                
                                    
x
+
TestPause/serial/Start (46.32s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-677692 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-677692 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (46.324791213s)
--- PASS: TestPause/serial/Start (46.32s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-940104 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-940104 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (101.204614ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-940104] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21932
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21932-348000/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21932-348000/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
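
Note: combining --no-kubernetes with --kubernetes-version is rejected up front with exit status 14 (MK_USAGE). A minimal sketch of that kind of mutually exclusive flag validation; the flag names mirror the CLI, but the code is illustrative only:

// flagconflict.go: reject a flag combination that makes no sense together.
package main

import (
	"flag"
	"fmt"
	"os"
)

func main() {
	noKubernetes := flag.Bool("no-kubernetes", false, "start without Kubernetes")
	kubernetesVersion := flag.String("kubernetes-version", "", "Kubernetes version to use")
	flag.Parse()

	if *noKubernetes && *kubernetesVersion != "" {
		fmt.Fprintln(os.Stderr, "cannot specify --kubernetes-version with --no-kubernetes")
		os.Exit(14) // same exit code the report shows for MK_USAGE
	}
	fmt.Println("flags ok")
}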

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithK8s (35.01s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-940104 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E1124 13:53:54.795583  351593 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/addons-715644/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
no_kubernetes_test.go:120: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-940104 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (34.654855602s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-940104 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (35.01s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (0.68s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.68s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (92.47s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.2842121098 start -p stopped-upgrade-040555 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.2842121098 start -p stopped-upgrade-040555 --memory=3072 --vm-driver=docker  --container-runtime=crio: (1m15.827305362s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.2842121098 -p stopped-upgrade-040555 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.2842121098 -p stopped-upgrade-040555 stop: (2.48027821s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-040555 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-040555 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (14.159434392s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (92.47s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (19.56s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-940104 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:137: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-940104 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (16.916780743s)
no_kubernetes_test.go:225: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-940104 status -o json
no_kubernetes_test.go:225: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-940104 status -o json: exit status 2 (347.340075ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-940104","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-940104
no_kubernetes_test.go:149: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-940104: (2.294162989s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (19.56s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (7.34s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-677692 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-677692 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (7.327250589s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (7.34s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (4.64s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:161: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-940104 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:161: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-940104 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (4.63949003s)
--- PASS: TestNoKubernetes/serial/Start (4.64s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: /home/jenkins/minikube-integration/21932-348000/.minikube/cache/linux/amd64/v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.31s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-940104 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-940104 "sudo systemctl is-active --quiet service kubelet": exit status 1 (309.848548ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.31s)
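
Note: the check above runs systemctl is-active for the kubelet unit inside the node and expects it to fail, since the profile was started without Kubernetes. A minimal sketch of the same assertion, assuming the binary path and profile name from this run:

// kubeletinactive.go: confirm kubelet is not active in a --no-kubernetes profile.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "ssh", "-p", "NoKubernetes-940104",
		"sudo systemctl is-active --quiet service kubelet")
	if err := cmd.Run(); err != nil {
		// systemctl is-active exits non-zero for an inactive unit, so a failure
		// here is the expected result for this profile.
		fmt.Println("kubelet is not active, as expected:", err)
		return
	}
	fmt.Println("unexpected: kubelet is active")
}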

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (3.14s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:194: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:204: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
no_kubernetes_test.go:204: (dbg) Done: out/minikube-linux-amd64 profile list --output=json: (2.384554345s)
--- PASS: TestNoKubernetes/serial/ProfileList (3.14s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:183: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-940104
no_kubernetes_test.go:183: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-940104: (1.279760376s)
--- PASS: TestNoKubernetes/serial/Stop (1.28s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (9.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:216: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-940104 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:216: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-940104 --driver=docker  --container-runtime=crio: (9.082204896s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (9.08s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.31s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:172: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-940104 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-940104 "sudo systemctl is-active --quiet service kubelet": exit status 1 (312.909658ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.31s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (0.89s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-040555
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.89s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (3.69s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-165759 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-165759 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (176.533012ms)

                                                
                                                
-- stdout --
	* [false-165759] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21932
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21932-348000/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21932-348000/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1124 13:57:00.680340  570507 out.go:360] Setting OutFile to fd 1 ...
	I1124 13:57:00.680512  570507 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:57:00.680527  570507 out.go:374] Setting ErrFile to fd 2...
	I1124 13:57:00.680533  570507 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1124 13:57:00.681120  570507 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21932-348000/.minikube/bin
	I1124 13:57:00.681654  570507 out.go:368] Setting JSON to false
	I1124 13:57:00.682866  570507 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-15","uptime":9568,"bootTime":1763983053,"procs":318,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1044-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1124 13:57:00.682941  570507 start.go:143] virtualization: kvm guest
	I1124 13:57:00.684362  570507 out.go:179] * [false-165759] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1124 13:57:00.685575  570507 out.go:179]   - MINIKUBE_LOCATION=21932
	I1124 13:57:00.685637  570507 notify.go:221] Checking for updates...
	I1124 13:57:00.687646  570507 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1124 13:57:00.688875  570507 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21932-348000/kubeconfig
	I1124 13:57:00.690081  570507 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21932-348000/.minikube
	I1124 13:57:00.696340  570507 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1124 13:57:00.697541  570507 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1124 13:57:00.699206  570507 config.go:182] Loaded profile config "cert-expiration-107341": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 13:57:00.699307  570507 config.go:182] Loaded profile config "kubernetes-upgrade-061040": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1124 13:57:00.699405  570507 config.go:182] Loaded profile config "running-upgrade-019487": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1124 13:57:00.699518  570507 driver.go:422] Setting default libvirt URI to qemu:///system
	I1124 13:57:00.725731  570507 docker.go:124] docker version: linux-29.0.3:Docker Engine - Community
	I1124 13:57:00.725885  570507 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1124 13:57:00.785835  570507 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-24 13:57:00.775791455 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1044-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652068352 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-15 Labels:[] ExperimentalBuild:false ServerVersion:29.0.3 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:fcd43222d6b07379a4be9786bda52438f0dd16a1 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.30.0] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.1] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1124 13:57:00.786001  570507 docker.go:319] overlay module found
	I1124 13:57:00.787736  570507 out.go:179] * Using the docker driver based on user configuration
	I1124 13:57:00.788777  570507 start.go:309] selected driver: docker
	I1124 13:57:00.788793  570507 start.go:927] validating driver "docker" against <nil>
	I1124 13:57:00.788818  570507 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1124 13:57:00.790909  570507 out.go:203] 
	W1124 13:57:00.792821  570507 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1124 13:57:00.794071  570507 out.go:203] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-165759 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-165759

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-165759

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-165759

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-165759

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-165759

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-165759

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-165759

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-165759

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-165759

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-165759

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-165759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-165759"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-165759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-165759"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-165759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-165759"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-165759

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-165759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-165759"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-165759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-165759"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-165759" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-165759" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-165759" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-165759" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-165759" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-165759" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-165759" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-165759" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-165759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-165759"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-165759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-165759"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-165759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-165759"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-165759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-165759"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-165759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-165759"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-165759" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-165759" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-165759" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-165759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-165759"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-165759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-165759"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-165759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-165759"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-165759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-165759"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-165759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-165759"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21932-348000/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 24 Nov 2025 13:55:45 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: cert-expiration-107341
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21932-348000/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 24 Nov 2025 13:55:46 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-061040
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21932-348000/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 24 Nov 2025 13:57:00 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: running-upgrade-019487
contexts:
- context:
    cluster: cert-expiration-107341
    extensions:
    - extension:
        last-update: Mon, 24 Nov 2025 13:55:45 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-107341
  name: cert-expiration-107341
- context:
    cluster: kubernetes-upgrade-061040
    user: kubernetes-upgrade-061040
  name: kubernetes-upgrade-061040
- context:
    cluster: running-upgrade-019487
    extensions:
    - extension:
        last-update: Mon, 24 Nov 2025 13:57:00 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: running-upgrade-019487
  name: running-upgrade-019487
current-context: running-upgrade-019487
kind: Config
users:
- name: cert-expiration-107341
  user:
    client-certificate: /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/cert-expiration-107341/client.crt
    client-key: /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/cert-expiration-107341/client.key
- name: kubernetes-upgrade-061040
  user:
    client-certificate: /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/kubernetes-upgrade-061040/client.crt
    client-key: /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/kubernetes-upgrade-061040/client.key
- name: running-upgrade-019487
  user:
    client-certificate: /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/running-upgrade-019487/client.crt
    client-key: /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/running-upgrade-019487/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-165759

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-165759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-165759"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-165759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-165759"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-165759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-165759"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-165759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-165759"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-165759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-165759"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-165759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-165759"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-165759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-165759"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-165759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-165759"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-165759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-165759"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-165759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-165759"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-165759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-165759"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-165759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-165759"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-165759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-165759"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-165759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-165759"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-165759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-165759"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-165759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-165759"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-165759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-165759"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-165759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-165759"

                                                
                                                
----------------------- debugLogs end: false-165759 [took: 3.334633498s] --------------------------------
helpers_test.go:175: Cleaning up "false-165759" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-165759
--- PASS: TestNetworkPlugins/group/false (3.69s)
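Note: the MK_USAGE exit captured in the stderr block above can be reproduced outside the harness. A minimal sketch, assuming the "false" network-plugin group passes --cni=false (the exact flag set is not shown in this excerpt):

    # Expected to fail validation: with the crio runtime, minikube requires a CNI,
    # so disabling it is rejected before any node is created.
    out/minikube-linux-amd64 start -p false-165759 --driver=docker --container-runtime=crio --cni=false
    echo $?   # non-zero exit status, matching the "Exiting due to MK_USAGE" message above

The test is still recorded as "--- PASS" and the debugLogs banner reports "[pass: true]", which suggests this rejection is the expected outcome for the crio runtime rather than a regression.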

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (50.88s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-551674 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-551674 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (50.87914113s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (50.88s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (50.21s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-495729 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-495729 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (50.207309556s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (50.21s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (7.22s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-551674 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [e3735245-8e28-4de0-a437-3a6f28002f38] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [e3735245-8e28-4de0-a437-3a6f28002f38] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 7.003300419s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-551674 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (7.22s)
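The DeployApp step above applies testdata/busybox.yaml and then polls for pods carrying the integration-test=busybox label. An equivalent manual check, as a sketch (the context name, label, and namespace are taken from the log; the 8m timeout mirrors the harness's wait, and kubectl wait is used here in place of the harness's own polling):

    kubectl --context old-k8s-version-551674 create -f testdata/busybox.yaml
    kubectl --context old-k8s-version-551674 -n default wait pod -l integration-test=busybox --for=condition=Ready --timeout=8m
    kubectl --context old-k8s-version-551674 exec busybox -- /bin/sh -c "ulimit -n"   # same fd-limit probe the test runs once the pod is Running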

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (7.21s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-495729 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [bf3a1272-92ff-45db-ba2f-8e360dd19c97] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [bf3a1272-92ff-45db-ba2f-8e360dd19c97] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 7.002755485s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-495729 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (7.21s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (15.98s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-551674 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-551674 --alsologtostderr -v=3: (15.980482288s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (15.98s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (18.09s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-495729 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-495729 --alsologtostderr -v=3: (18.089654558s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (18.09s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-551674 -n old-k8s-version-551674
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-551674 -n old-k8s-version-551674: exit status 7 (79.610329ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-551674 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (51.72s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-551674 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-551674 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (51.376746489s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-551674 -n old-k8s-version-551674
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (51.72s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-495729 -n no-preload-495729
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-495729 -n no-preload-495729: exit status 7 (77.343573ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-495729 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.18s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (25.54s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-495729 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-495729 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (25.128854121s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-495729 -n no-preload-495729
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (25.54s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (7s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-xvfgk" [885596b0-37d2-4c9a-9577-ac17e3e35b79] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-xvfgk" [885596b0-37d2-4c9a-9577-ac17e3e35b79] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 7.002817388s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (7.00s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (67.09s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-456660 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-456660 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (1m7.089117285s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (67.09s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-xvfgk" [885596b0-37d2-4c9a-9577-ac17e3e35b79] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003264083s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-495729 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-495729 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-lfgtw" [a793b031-3ed9-4323-be38-0ae496db715b] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.002918128s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (42.63s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-098307 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-098307 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (42.634122328s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (42.63s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-lfgtw" [a793b031-3ed9-4323-be38-0ae496db715b] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003870871s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-551674 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-551674 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (28.55s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-305966 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-305966 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (28.548018222s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (28.55s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.28s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-098307 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [0cbb62f6-2583-44e5-8c7f-99a32975fb68] Pending
helpers_test.go:352: "busybox" [0cbb62f6-2583-44e5-8c7f-99a32975fb68] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [0cbb62f6-2583-44e5-8c7f-99a32975fb68] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.004340416s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-098307 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.28s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (2.56s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-305966 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-305966 --alsologtostderr -v=3: (2.558020689s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (2.56s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (8.23s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-456660 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [de501807-9ee9-4a20-982b-0c68a8f2a4a7] Pending
helpers_test.go:352: "busybox" [de501807-9ee9-4a20-982b-0c68a8f2a4a7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [de501807-9ee9-4a20-982b-0c68a8f2a4a7] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.003737428s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-456660 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.23s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-305966 -n newest-cni-305966
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-305966 -n newest-cni-305966: exit status 7 (78.387458ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-305966 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (11.83s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-305966 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-305966 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (11.482297662s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-305966 -n newest-cni-305966
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (11.83s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (38.79s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-165759 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-165759 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (38.791825588s)
--- PASS: TestNetworkPlugins/group/auto/Start (38.79s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (16.47s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-098307 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-098307 --alsologtostderr -v=3: (16.46860701s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (16.47s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (16.3s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-456660 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-456660 --alsologtostderr -v=3: (16.297592727s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (16.30s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-305966 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.26s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-098307 -n default-k8s-diff-port-098307
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-098307 -n default-k8s-diff-port-098307: exit status 7 (92.805006ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-098307 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.21s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (45.39s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-098307 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-098307 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (45.018496738s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-098307 -n default-k8s-diff-port-098307
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (45.39s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Start (45.97s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-165759 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-165759 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (45.966925528s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (45.97s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.23s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-456660 -n embed-certs-456660
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-456660 -n embed-certs-456660: exit status 7 (100.987427ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-456660 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.23s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (48.94s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-456660 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-456660 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (48.602573534s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-456660 -n embed-certs-456660
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (48.94s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/KubeletFlags (0.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-165759 "pgrep -a kubelet"
I1124 14:00:45.729425  351593 config.go:182] Loaded profile config "auto-165759": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.35s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/NetCatPod (11.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-165759 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-hfw7c" [a65deb45-944c-4b9f-87f3-3e9bbd5fb1f8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-hfw7c" [a65deb45-944c-4b9f-87f3-3e9bbd5fb1f8] Running
E1124 14:00:51.723452  351593 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/addons-715644/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.003846507s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.25s)
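For reference, the NetCatPod step above re-creates the netcat deployment from testdata/netcat-deployment.yaml and then polls until pods labelled app=netcat report Ready, with a 15m ceiling. A rough kubectl-only equivalent, assuming the same manifest and the auto-165759 context created by this run (the suite itself polls through its Go helpers rather than kubectl wait), would be:

	# illustrative reproduction only; not part of the test suite's output
	kubectl --context auto-165759 replace --force -f testdata/netcat-deployment.yaml
	kubectl --context auto-165759 wait -n default --for=condition=Ready pod -l app=netcat --timeout=15m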

                                                
                                    
x
+
TestNetworkPlugins/group/auto/DNS (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-165759 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Localhost (0.08s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-165759 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.08s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/HairPin (0.08s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-165759 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.08s)
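The Localhost and HairPin probes above both exec nc inside the netcat pod: Localhost targets localhost:8080, i.e. the pod reaching its own listener directly, while HairPin targets the service name netcat, i.e. the pod reaching itself back through its own service (hairpin traffic). To repeat the two probes by hand, assuming the netcat deployment and its matching service from the suite's testdata are still present in the default namespace:

	# same commands as logged above, shown together for comparison
	kubectl --context auto-165759 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
	kubectl --context auto-165759 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"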

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-wqmwj" [6c8af29b-f744-4b39-94fa-3b71fa5188ee] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003396808s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-jjp5h" [c90e9b9f-53d9-4050-b901-407a10ece7db] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.004481511s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)
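The ControllerPod step gates the kindnet connectivity checks on the CNI daemonset itself: it waits up to 10m for a kube-system pod labelled app=kindnet to be Running and healthy. A minimal manual equivalent, assuming the kindnet-165759 context from this run:

	# illustrative only; mirrors the 10m wait on the app=kindnet label in kube-system
	kubectl --context kindnet-165759 -n kube-system wait --for=condition=Ready pod -l app=kindnet --timeout=10m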

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Start (52.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-165759 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-165759 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (52.379294147s)
--- PASS: TestNetworkPlugins/group/calico/Start (52.38s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.1s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-wqmwj" [6c8af29b-f744-4b39-94fa-3b71fa5188ee] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.002981975s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-098307 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-165759 "pgrep -a kubelet"
I1124 14:01:18.247147  351593 config.go:182] Loaded profile config "kindnet-165759": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/NetCatPod (10.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-165759 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-gbg7f" [c1d86763-915d-45cc-89bd-deb70fb9cad9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-gbg7f" [c1d86763-915d-45cc-89bd-deb70fb9cad9] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.003965706s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.20s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-dz7pl" [33dedc32-bdd3-4165-8073-e3dbc2ea8c16] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005750627s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.29s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-098307 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.29s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-dz7pl" [33dedc32-bdd3-4165-8073-e3dbc2ea8c16] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003463343s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-456660 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-165759 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/Localhost (0.08s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-165759 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.08s)

                                                
                                    
x
+
TestNetworkPlugins/group/kindnet/HairPin (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-165759 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.09s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Start (50.57s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-165759 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-165759 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (50.566904932s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (50.57s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.29s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-456660 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Start (66.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-165759 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-165759 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m6.12174001s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (66.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (56.63s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-165759 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-165759 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (56.629874046s)
--- PASS: TestNetworkPlugins/group/flannel/Start (56.63s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-rp6p6" [3defbe4f-939e-4b74-8b77-9546cc7639c2] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:352: "calico-node-rp6p6" [3defbe4f-939e-4b74-8b77-9546cc7639c2] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005655407s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/KubeletFlags (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-165759 "pgrep -a kubelet"
I1124 14:02:14.767293  351593 config.go:182] Loaded profile config "calico-165759": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.32s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/NetCatPod (8.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-165759 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-d5ts2" [6200c4fe-025d-4bfd-a93b-421446d43e46] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-d5ts2" [6200c4fe-025d-4bfd-a93b-421446d43e46] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 8.005297571s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (8.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-165759 "pgrep -a kubelet"
I1124 14:02:21.779236  351593 config.go:182] Loaded profile config "custom-flannel-165759": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.30s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-165759 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-xvkkw" [81ae689b-4edf-4369-9ac8-dc96f9e5a0a8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-xvkkw" [81ae689b-4edf-4369-9ac8-dc96f9e5a0a8] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.003861205s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.17s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-165759 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-165759 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.14s)

                                                
                                    
x
+
TestNetworkPlugins/group/calico/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-165759 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/DNS (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-165759 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/Localhost (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-165759 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.09s)

                                                
                                    
x
+
TestNetworkPlugins/group/custom-flannel/HairPin (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-165759 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.09s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Start (68.97s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-165759 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-165759 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m8.96956613s)
--- PASS: TestNetworkPlugins/group/bridge/Start (68.97s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-zdqsk" [4b614236-6830-4b76-9d96-f5b152e1de75] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.004071005s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.37s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-165759 "pgrep -a kubelet"
I1124 14:02:49.384418  351593 config.go:182] Loaded profile config "enable-default-cni-165759": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.37s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-165759 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-fxfjx" [769ae02d-a38e-4bae-b3a9-81c43a8c2d12] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-fxfjx" [769ae02d-a38e-4bae-b3a9-81c43a8c2d12] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 9.003065763s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (9.19s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/KubeletFlags (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-165759 "pgrep -a kubelet"
E1124 14:02:54.132984  351593 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/old-k8s-version-551674/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:02:54.139442  351593 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/old-k8s-version-551674/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:02:54.151166  351593 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/old-k8s-version-551674/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
I1124 14:02:54.165995  351593 config.go:182] Loaded profile config "flannel-165759": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.31s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/NetCatPod (8.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-165759 replace --force -f testdata/netcat-deployment.yaml
E1124 14:02:54.172797  351593 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/old-k8s-version-551674/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:02:54.214251  351593 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/old-k8s-version-551674/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:02:54.295592  351593 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/old-k8s-version-551674/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-55nmd" [93aec765-e711-4e43-a13a-44c8ca89009a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1124 14:02:54.457188  351593 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/old-k8s-version-551674/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:02:54.779317  351593 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/old-k8s-version-551674/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:02:55.421140  351593 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/old-k8s-version-551674/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1124 14:02:56.702605  351593 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/old-k8s-version-551674/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:352: "netcat-cd4db9dbf-55nmd" [93aec765-e711-4e43-a13a-44c8ca89009a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 8.003976561s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (8.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/DNS (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-165759 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/Localhost (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-165759 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.09s)

                                                
                                    
x
+
TestNetworkPlugins/group/enable-default-cni/HairPin (0.09s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-165759 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.09s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-165759 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.13s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.12s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-165759 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.12s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-165759 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.11s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-165759 "pgrep -a kubelet"
I1124 14:03:52.296355  351593 config.go:182] Loaded profile config "bridge-165759": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.28s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/NetCatPod (9.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-165759 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-7ghbq" [e5fbd518-7225-4e3e-bbe3-15ec600e8481] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-7ghbq" [e5fbd518-7225-4e3e-bbe3-15ec600e8481] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 9.003584779s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (9.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/DNS (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-165759 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.10s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/Localhost (0.08s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-165759 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.08s)

                                                
                                    
x
+
TestNetworkPlugins/group/bridge/HairPin (0.08s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-165759 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.08s)

                                                
                                    

Test skip (27/328)

x
+
TestDownloadOnly/v1.28.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.34.1/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:763: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes (0s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
x
+
TestISOImage (0s)

                                                
                                                
=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

                                                
                                    
x
+
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-036543" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-036543
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/kubenet (3.59s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:615: 
----------------------- debugLogs start: kubenet-165759 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-165759

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-165759

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-165759

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-165759

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-165759

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-165759

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-165759

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-165759

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-165759

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-165759

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-165759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-165759"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-165759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-165759"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-165759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-165759"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-165759

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-165759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-165759"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-165759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-165759"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-165759" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-165759" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-165759" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-165759" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-165759" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-165759" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-165759" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-165759" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-165759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-165759"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-165759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-165759"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-165759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-165759"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-165759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-165759"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-165759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-165759"
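The host: entries in this dump are collected over minikube ssh, which is why each one fails with "Profile not found" once the kubenet-165759 profile is gone. A minimal sketch, assuming a running kubenet-165759 profile, of the equivalent manual checks (these commands are illustrative and were not part of the recorded run):

# mirror the host-level probes above against a live profile
out/minikube-linux-amd64 -p kubenet-165759 ssh "sudo iptables-save"
out/minikube-linux-amd64 -p kubenet-165759 ssh "sudo iptables -t nat -S"
out/minikube-linux-amd64 -p kubenet-165759 ssh "ip a s; ip r s"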

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-165759" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-165759" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-165759" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-165759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-165759"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-165759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-165759"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-165759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-165759"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-165759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-165759"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-165759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-165759"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21932-348000/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 24 Nov 2025 13:55:45 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: cert-expiration-107341
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21932-348000/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 24 Nov 2025 13:55:46 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-061040
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21932-348000/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 24 Nov 2025 13:56:43 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: running-upgrade-019487
contexts:
- context:
    cluster: cert-expiration-107341
    extensions:
    - extension:
        last-update: Mon, 24 Nov 2025 13:55:45 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-107341
  name: cert-expiration-107341
- context:
    cluster: kubernetes-upgrade-061040
    user: kubernetes-upgrade-061040
  name: kubernetes-upgrade-061040
- context:
    cluster: running-upgrade-019487
    user: running-upgrade-019487
  name: running-upgrade-019487
current-context: ""
kind: Config
users:
- name: cert-expiration-107341
  user:
    client-certificate: /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/cert-expiration-107341/client.crt
    client-key: /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/cert-expiration-107341/client.key
- name: kubernetes-upgrade-061040
  user:
    client-certificate: /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/kubernetes-upgrade-061040/client.crt
    client-key: /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/kubernetes-upgrade-061040/client.key
- name: running-upgrade-019487
  user:
    client-certificate: /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/running-upgrade-019487/client.crt
    client-key: /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/running-upgrade-019487/client.key
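The kubeconfig dump above explains every "context does not exist" error in this section: current-context is empty and there is no kubenet-165759 entry, because that profile was never started. A minimal sketch (not run by the test) of how to confirm this against the same kubeconfig:

kubectl config get-contexts                 # lists only cert-expiration-107341, kubernetes-upgrade-061040, running-upgrade-019487
kubectl config current-context              # fails: current-context is not set
kubectl --context kubenet-165759 get pods   # reproduces the error above: context "kubenet-165759" does not exist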

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-165759

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-165759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-165759"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-165759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-165759"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-165759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-165759"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-165759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-165759"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-165759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-165759"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-165759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-165759"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-165759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-165759"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-165759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-165759"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-165759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-165759"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-165759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-165759"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-165759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-165759"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-165759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-165759"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-165759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-165759"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-165759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-165759"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-165759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-165759"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-165759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-165759"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-165759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-165759"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-165759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-165759"

                                                
                                                
----------------------- debugLogs end: kubenet-165759 [took: 3.42064668s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-165759" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-165759
--- SKIP: TestNetworkPlugins/group/kubenet (3.59s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (5.86s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-165759 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-165759

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-165759

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-165759

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-165759

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-165759

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-165759

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-165759
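The netcat probes above (nslookup, dig over udp/53 and tcp/53, nc against 10.96.0.10) are cluster-DNS checks that would normally run inside the test's netcat pod; they only fail here because the cilium-165759 context was never created. A rough sketch of equivalent manual probes, assuming a running cluster, a deployment named netcat, and an image that ships nslookup/dig/nc:

kubectl --context cilium-165759 exec deploy/netcat -- nslookup kubernetes.default
kubectl --context cilium-165759 exec deploy/netcat -- dig @10.96.0.10 kubernetes.default.svc.cluster.local
kubectl --context cilium-165759 exec deploy/netcat -- nc -vz -w 2 10.96.0.10 53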

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-165759

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-165759

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-165759

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-165759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-165759"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-165759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-165759"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-165759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-165759"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-165759

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-165759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-165759"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-165759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-165759"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-165759" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-165759" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-165759" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-165759" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-165759" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-165759" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-165759" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-165759" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-165759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-165759"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-165759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-165759"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-165759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-165759"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-165759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-165759"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-165759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-165759"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-165759

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-165759

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-165759" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-165759" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-165759

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-165759

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-165759" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-165759" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-165759" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-165759" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-165759" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-165759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-165759"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-165759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-165759"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-165759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-165759"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-165759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-165759"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-165759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-165759"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21932-348000/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 24 Nov 2025 13:55:45 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: cert-expiration-107341
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21932-348000/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 24 Nov 2025 13:55:46 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: kubernetes-upgrade-061040
contexts:
- context:
    cluster: cert-expiration-107341
    extensions:
    - extension:
        last-update: Mon, 24 Nov 2025 13:55:45 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: cert-expiration-107341
  name: cert-expiration-107341
- context:
    cluster: kubernetes-upgrade-061040
    user: kubernetes-upgrade-061040
  name: kubernetes-upgrade-061040
current-context: ""
kind: Config
users:
- name: cert-expiration-107341
  user:
    client-certificate: /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/cert-expiration-107341/client.crt
    client-key: /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/cert-expiration-107341/client.key
- name: kubernetes-upgrade-061040
  user:
    client-certificate: /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/kubernetes-upgrade-061040/client.crt
    client-key: /home/jenkins/minikube-integration/21932-348000/.minikube/profiles/kubernetes-upgrade-061040/client.key
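As with the kubenet dump earlier, a cilium-165759 context would only appear in this kubeconfig after the profile is actually started, which the skipped test never does. A minimal sketch of how the profile and its context would be created and verified (flags are illustrative, not necessarily the ones the test would pass):

out/minikube-linux-amd64 profile list                    # cilium-165759 is absent
out/minikube-linux-amd64 start -p cilium-165759 --cni=cilium
kubectl config get-contexts cilium-165759                # present only after a successful start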

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-165759

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-165759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-165759"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-165759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-165759"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-165759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-165759"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-165759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-165759"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-165759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-165759"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-165759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-165759"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-165759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-165759"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-165759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-165759"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-165759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-165759"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-165759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-165759"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-165759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-165759"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-165759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-165759"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-165759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-165759"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-165759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-165759"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-165759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-165759"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-165759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-165759"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-165759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-165759"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-165759" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-165759"

                                                
                                                
----------------------- debugLogs end: cilium-165759 [took: 5.678805455s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-165759" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-165759
--- SKIP: TestNetworkPlugins/group/cilium (5.86s)

                                                
                                    